A Robust Target Detection Algorithm Based on the Fusion of Frequency-Modulated Continuous Wave Radar and a Monocular Camera
List of Figures
- Figure 1. Decision-level fusion framework.
- Figure 2. FMCW radar equation schematic block diagram.
- Figure 3. Schematic representation of the lane detection algorithm.
- Figure 4. BEV of the lane to be detected. (a) RGB format. (b) Greyscale format. (c) ROI of the lanes to be detected. (d) Binary format.
- Figure 5. Histograms of lane lines to be detected.
- Figure 6. Visual representation of lane detection.
- Figure 7. Network structure of YOLOv5s.
- Figure 8. Different locations of targets.
- Figure 9. Distance fitting curve.
- Figure 10. Sampling method of frames.
- Figure 11. Schematic of coordinate system relationships. (a) Position of radar coordinate and camera coordinate. (b) Position of camera coordinate, image coordinate, and pixel coordinate.
- Figure 12. Camera calibration. (a) Camera calibration chessboard graph. (b) Corner extraction and correction of checkerboard.
- Figure 13. Hardware system for information fusion of radar and monocular camera.
- Figure 14. Target matching algorithm of radar and camera.
- Figure 15. Lane detection results. (a) Normal light. (b) Ground icon interference. (c) Ground shelter interference. (d) Weak light.
- Figure 16. Detection results of YOLOv5s. (a) Normal light. (b) Weak light. (c) Intense light. (d) Targets occluded by trees.
- Figure 17. Actual positions of targets.
- Figure 18. Radar detection results. (a) Original 2D point clouds of radar. (b) Valid point clouds after filtering. (c) DBSCAN target clustering. (d) Valid point clouds coalescing after clustering.
- Figure 19. Results of multi-sensor spatiotemporal calibration. (a) Without lane detection. (b) With lane detection.
- Figure 20. Information fusion results. (a) Without lane detection. (b) With lane detection.
- Figure 21. Weak light.
- Figure 22. Intense light.
- Figure 23. Normal light.
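The lane-detection figures above (Figures 3–6) depict a histogram-and-sliding-window search on a bird's-eye-view (BEV) image: greyscale conversion, binarisation, a column histogram to locate the two lane-line bases, and window-by-window pixel collection followed by a polynomial fit. The sketch below illustrates that idea in Python/OpenCV; the threshold, window count, and margin values are illustrative assumptions, not the parameters used in the paper, which implements its lane detection in MATLAB.

```python
import cv2
import numpy as np

def detect_lane_lines(bev_bgr, n_windows=9, margin=60, min_pixels=50):
    """Histogram-and-sliding-window lane search on a bird's-eye-view image.

    Illustrative sketch only: all thresholds and window sizes are placeholders.
    """
    # Greyscale conversion and binary thresholding (cf. Figure 4b/4d).
    grey = cv2.cvtColor(bev_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(grey, 180, 255, cv2.THRESH_BINARY)

    # Column histogram of the lower half locates the lane-line bases (cf. Figure 5).
    histogram = np.sum(binary[binary.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    left_base = np.argmax(histogram[:midpoint])
    right_base = midpoint + np.argmax(histogram[midpoint:])

    # Sliding windows climb the image and collect lane pixels (cf. Figure 6).
    nonzero_y, nonzero_x = binary.nonzero()
    window_height = binary.shape[0] // n_windows
    fits = []
    for base in (left_base, right_base):
        current_x, lane_idx = base, []
        for w in range(n_windows):
            y_low = binary.shape[0] - (w + 1) * window_height
            y_high = binary.shape[0] - w * window_height
            in_win = ((nonzero_y >= y_low) & (nonzero_y < y_high) &
                      (nonzero_x >= current_x - margin) & (nonzero_x < current_x + margin))
            lane_idx.append(in_win.nonzero()[0])
            if in_win.sum() > min_pixels:
                current_x = int(nonzero_x[in_win].mean())
        lane_idx = np.concatenate(lane_idx)
        # Second-order polynomial fit x = a*y^2 + b*y + c for each lane line.
        fits.append(np.polyfit(nonzero_y[lane_idx], nonzero_x[lane_idx], 2))
    return fits  # [left_fit, right_fit]
```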
Abstract
1. Introduction
2. Information Fusion Framework
3. Realization of Fusion
3.1. FMCW Radar Equation
3.2. Video Data Pre-Processing
3.2.1. Lane Detection
3.2.2. YOLOv5 Target Detection Algorithm
3.3. Spatiotemporal Calibration
3.3.1. Temporal Calibration
3.3.2. Spatial Calibration
3.4. Target Matching
3.5. Evaluation Indicators
4. Experiments and Results
4.1. Experimental Platforms and Environments
4.2. Visual Detection Results
4.2.1. Validation of Lane Detection Algorithm
4.2.2. Validation of YOLOv5 Algorithm
4.3. Validation of Radar Target Detection Algorithm
4.4. Validation of Information Fusion
4.4.1. Comparison with Fusion without Lane Detection
4.4.2. Comparison with Other Related Studies
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Hassan, R.; Abdel-Rahim, A.; Hadhoud, R. Study of Road Traffic Accidents Cases admitted to Ain Shams University Hospitals during Years 2017 and 2018. Ain Shams J. Forensic Med. Clin. Toxicol. 2022, 38, 1–10. [Google Scholar] [CrossRef]
- Ahmed, S.K.; Mohammed, M.G.; Abdulqadir, S.O.; El-Kader, R.G.A.; El-Shall, N.A.; Chandran, D.; Rehman, M.E.U.; Dhama, K. Road traffic accidental injuries and deaths: A neglected global health issue. Health Sci. Rep. 2023, 6, e1240. [Google Scholar] [CrossRef]
- Shams, Z.; Naderi, H.; Nassiri, H. Assessing the effect of inattention-related error and anger in driving on road accidents among Iranian heavy vehicle drivers. IATSS Res. 2021, 45, 210–217. [Google Scholar] [CrossRef]
- Lu, J.; Peng, Z.; Yang, S.; Ma, Y.; Wang, R.; Pang, Z.; Feng, X.; Chen, Y.; Cao, Y. A review of sensory interactions between autonomous vehicles and drivers. J. Syst. Archit. 2023, 141, 102932. [Google Scholar] [CrossRef]
- Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors 2021, 21, 2140. [Google Scholar] [CrossRef]
- Wang, Z.; Wu, Y.; Niu, Q. Multi-sensor fusion in automated driving: A survey. IEEE Access 2019, 8, 2847–2868. [Google Scholar] [CrossRef]
- Sushma, R.; Kumar, J.S. Autonomous vehicle: Challenges and implementation. J. Electr. Eng. Autom. 2022, 4, 100–108. [Google Scholar] [CrossRef]
- Kim, J.; Park, B.J.; Kim, J. Empirical Analysis of Autonomous Vehicle’s LiDAR Detection Performance Degradation for Actual Road Driving in Rain and Fog. Sensors 2023, 23, 2972. [Google Scholar] [CrossRef] [PubMed]
- Bhupathiraju, S.H.V.; Sheldon, J.; Bauer, L.A.; Bindschaedler, V.; Sugawara, T.; Rampazzi, S. EMI-LiDAR: Uncovering Vulnerabilities of LiDAR Sensors in Autonomous Driving Setting Using Electromagnetic Interference. In Proceedings of the 16th ACM Conference on Security and Privacy in Wireless and Mobile Networks, Guildford, UK, 29 May–1 June 2023; pp. 329–340. [Google Scholar]
- Giannaros, A.; Karras, A.; Theodorakopoulos, L.; Karras, C.; Kranias, P.; Schizas, N.; Kalogeratos, G.; Tsolis, D. Autonomous vehicles: Sophisticated attacks, safety issues, challenges, open topics, blockchain, and future directions. J. Cybersecur. Priv. 2023, 3, 493–543. [Google Scholar] [CrossRef]
- Gautam, S.; Kumar, A. Image-based automatic traffic lights detection system for autonomous cars: A review. Multimed. Tools Appl. 2023, 82, 26135–26182. [Google Scholar] [CrossRef]
- Sharma, V.K.; Dhiman, P.; Rout, R.K. Improved traffic sign recognition algorithm based on YOLOv4-tiny. J. Vis. Commun. Image Represent. 2023, 91, 103774. [Google Scholar] [CrossRef]
- Guo, Y.; Wang, X.; Lan, X.; Su, T. Traffic target location estimation based on tensor decomposition in intelligent transportation system. IEEE Trans. Intell. Transp. Syst. 2022, 25, 816–828. [Google Scholar] [CrossRef]
- Wang, X.; Guo, Y.; Wen, F.; He, J.; Truong, T.K. EMVS-MIMO radar with sparse Rx geometry: Tensor modeling and 2D direction finding. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 8062–8075. [Google Scholar] [CrossRef]
- Tang, X.; Zhang, Z.; Qin, Y. On-road object detection and tracking based on radar and vision fusion: A review. IEEE Intell. Transp. Syst. Mag. 2021, 14, 103–128. [Google Scholar] [CrossRef]
- Bombini, L.; Cerri, P.; Medici, P.; Alessandretti, G. Radar-vision fusion for vehicle detection. In Proceedings of the International Workshop on Intelligent Transportation, Toronto, ON, Canada, 17–20 September 2006; Volume 65, p. 70. [Google Scholar]
- Chipengo, U.; Commens, M. A 77 GHz simulation study of roadway infrastructure radar signatures for smart roads. In Proceedings of the 2019 16th European Radar Conference (EuRAD), Paris, France, 2–4 October 2019; pp. 137–140. [Google Scholar]
- Hsu, Y.W.; Lai, Y.H.; Zhong, K.Q.; Yin, T.K.; Perng, J.W. Developing an on-road object detection system using monovision and radar fusion. Energies 2019, 13, 116. [Google Scholar] [CrossRef]
- Abbas, A.F.; Sheikh, U.U.; Al-Dhief, F.T.; Mohd, M.N.H. A comprehensive review of vehicle detection using computer vision. TELKOMNIKA (Telecommun. Comput. Electron. Control) 2021, 19, 838–850. [Google Scholar] [CrossRef]
- Xiao, Y.; Tian, Z.; Yu, J.; Zhang, Y.; Liu, S.; Du, S.; Lan, X. A review of object detection based on deep learning. Multimed. Tools Appl. 2020, 79, 23729–23791. [Google Scholar] [CrossRef]
- Liu, L.; Ouyang, W.; Wang, X.; Fieguth, P.; Chen, J.; Liu, X.; Pietikäinen, M. Deep learning for generic object detection: A survey. Int. J. Comput. Vis. 2020, 128, 261–318. [Google Scholar] [CrossRef]
- Ciaparrone, G.; Sánchez, F.L.; Tabik, S.; Troiano, L.; Tagliaferri, R.; Herrera, F. Deep learning in video multi-object tracking: A survey. Neurocomputing 2020, 381, 61–88. [Google Scholar] [CrossRef]
- Dimitrievski, M.; Jacobs, L.; Veelaert, P.; Philips, W. People tracking by cooperative fusion of radar and camera sensors. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 509–514. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2016, 39, 1137–1149. [Google Scholar] [CrossRef]
- Lin, J.J.; Guo, J.I.; Shivanna, V.M.; Chang, S.Y. Deep Learning Derived Object Detection and Tracking Technology Based on Sensor Fusion of Millimeter-Wave Radar/Video and Its Application on Embedded Systems. Sensors 2023, 23, 2746. [Google Scholar] [CrossRef] [PubMed]
- Yeniaydin, Y.; Schmidt, K.W. A lane detection algorithm based on reliable lane markings. In Proceedings of the 2018 26th Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey, 2–5 May 2018; pp. 1–4. [Google Scholar]
- Sun, F.; Li, Z.; Li, Z. A traffic flow detection system based on YOLOv5. In Proceedings of the 2021 2nd International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT), Shanghai, China, 15–17 October 2021; pp. 458–464. [Google Scholar]
- Long, N.; Wang, K.; Cheng, R.; Yang, K.; Bai, J. Fusion of millimeter wave radar and RGB-depth sensors for assisted navigation of the visually impaired. In Proceedings of the Millimetre Wave and Terahertz Sensors and Technology XI, Berlin, Germany, 5 October 2018; Volume 10800, pp. 21–28. [Google Scholar]
- Zhong, Z.; Liu, S.; Mathew, M.; Dubey, A. Camera radar fusion for increased reliability in ADAS applications. Electron. Imaging 2018, 2018, 251–254. [Google Scholar]
- Lv, P.; Wang, B.; Cheng, F.; Xue, J. Multi-Objective Association Detection of Farmland Obstacles Based on Information Fusion of Millimeter Wave Radar and Camera. Sensors 2022, 23, 230. [Google Scholar] [CrossRef] [PubMed]
- Song, M.; Lim, J.; Shin, D.J. The velocity and range detection using the 2D-FFT scheme for automotive radars. In Proceedings of the 2014 4th IEEE International Conference on Network Infrastructure and Digital Content, Beijing, China, 19–21 September 2014; pp. 507–510. [Google Scholar]
- Yuan, Y.; Li, W.; Sun, Z.; Zhang, Y.; Xiang, H. Two-dimensional FFT and two-dimensional CA-CFAR based on ZYNQ. J. Eng. 2019, 2019, 6483–6486. [Google Scholar] [CrossRef]
- Lim, S.; Lee, S.; Kim, S.C. Clustering of detected targets using DBSCAN in automotive radar systems. In Proceedings of the 2018 19th International Radar Symposium (IRS), Bonn, Germany, 20–22 June 2018; pp. 1–7. [Google Scholar]
- Winkler, V. Range Doppler detection for automotive FMCW radars. In Proceedings of the 2007 European Radar Conference, Munich, Germany, 10–12 October 2007; pp. 166–169. [Google Scholar]
- Mukherjee, A.; Sinha, A.; Choudhury, D. A novel architecture of area efficient FFT algorithm for FPGA implementation. ACM SIGARCH Comput. Archit. News 2016, 42, 1–6. [Google Scholar] [CrossRef]
- Barnhart, B.L. The Hilbert-Huang Transform: Theory, Applications, Development. Ph.D. Thesis, The University of Iowa, Iowa City, IA, USA, 2011. [Google Scholar]
- Xiaoling, Y.; Weixin, J.; Haoran, Y. Traffic sign recognition and detection based on YOLOv5. Inf. Technol. Informatiz. 2021, 4, 28–30. [Google Scholar]
- Szeliski, R. Computer Vision: Algorithms and Applications; Springer Nature: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
- Guo, X.P.; Du, J.S.; Gao, J.; Wang, W. Pedestrian detection based on fusion of millimeter wave radar and vision. In Proceedings of the 2018 International Conference on Artificial Intelligence and Pattern Recognition, New York, NY, USA, 18 August 2018; pp. 38–42. [Google Scholar]
- Mo, C.; Li, Y.; Zheng, L.; Ren, Y.; Wang, K.; Li, Y.; Xiong, Z. Obstacles detection based on millimetre-wave radar and image fusion techniques. In Proceedings of the IET International Conference on Intelligent and Connected Vehicles (ICV 2016), Chongqing, China, 22–23 September 2016; pp. 1–6. [Google Scholar]
- Cai, G.; Wang, X.; Shi, J.; Lan, X.; Su, T.; Guo, Y. Vehicle Detection Based on Information Fusion of mmWave Radar and Monocular Vision. Electronics 2023, 12, 2840. [Google Scholar] [CrossRef]
- Su, Y.; Wang, X.; Lan, X. Co-prime Array Interpolation for DOA Estimation Using Deep Matrix Iterative Network. IEEE Trans. Instrum. Meas. 2024, 1. [Google Scholar] [CrossRef]
- Wang, C.; Yeh, I.; Liao, H. YOLOv9: Learning what you want to learn using programmable gradient information. arXiv 2024, arXiv:2402.13616. [Google Scholar]
| Software and Device | Version/Model | Function |
|---|---|---|
| Operating system | Windows 11 | – |
| CPU | i5-11400 | – |
| GPU | NVIDIA GeForce RTX 3060 | – |
| CUDA | 12.3 | – |
| PyTorch | 11.3 | – |
| Python | 3.8 | – |
| PyCharm | 2023 | Running the YOLO algorithms |
| MATLAB | R2022b | Running the lane detection, radar, and fusion algorithms |
| Camera | Hewlett-Packard (HP) 1080p | – |
| Radar | AWR2243 [42] | – |
| Model | Params/M | FLOPs/G | mAP@0.5/% | FPS |
|---|---|---|---|---|
| YOLOv5s | 7.2 | 16.5 | 56.8 | 106 |
| YOLOv7-tiny | 6.2 | 13.9 | 51.3 | 123 |
| YOLOv8s | 11.2 | 28.6 | 61.8 | 101 |
| YOLOv9s | 7.1 | 26.4 | 63.4 | – |
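The YOLOv5s row can be reproduced approximately with the public Ultralytics release; a minimal sketch follows. The torch.hub entry point, the dummy 640 × 640 frame, and the run count are illustrative assumptions; the FPS in the table was measured on the paper's own platform (RTX 3060, see the setup table above), so this snippet will print different numbers on other hardware.

```python
import time

import numpy as np
import torch

# Load the pretrained YOLOv5s model from the Ultralytics hub
# (weights are downloaded on first use and cached afterwards).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.eval()

# Parameter count in millions (should be close to the 7.2 M listed above).
params_m = sum(p.numel() for p in model.parameters()) / 1e6
print(f'Params: {params_m:.1f} M')

# Rough FPS estimate: average latency over repeated runs on a dummy frame.
frame = np.zeros((640, 640, 3), dtype=np.uint8)
model(frame)                      # warm-up run
n_runs = 50
start = time.perf_counter()
for _ in range(n_runs):
    results = model(frame)        # Detections object (boxes, classes, scores)
elapsed = time.perf_counter() - start
print(f'FPS: {n_runs / elapsed:.0f}')
results.print()                   # summary of the last inference
```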
| Environment | Algorithm | Precision | Recall | F1 |
|---|---|---|---|---|
| Normal light | Camera-only detection | 86.91% | 88.44% | 0.88 |
| Normal light | Decision-level fusion | 86.81% | 90.95% | 0.89 |
| Normal light | Fusion (ours) | 96.83% | 99.75% | 0.98 |
| Weak light | Camera-only detection | 79.18% | 86.93% | 0.83 |
| Weak light | Decision-level fusion | 82.50% | 82.91% | 0.83 |
| Weak light | Fusion (ours) | 96.32% | 98.74% | 0.98 |
| Intense light | Camera-only detection | 69.10% | 97.59% | 0.81 |
| Intense light | Decision-level fusion | 62.03% | 97.81% | 0.76 |
| Intense light | Fusion (ours) | 96.15% | 98.46% | 0.97 |
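Assuming the standard definition of the F1 score used as the evaluation indicator (Section 3.5), the last column is the harmonic mean of precision P and recall R, F1 = 2PR/(P + R). A short cross-check of the fusion rows:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# Cross-check of the "Fusion (ours)" rows in the table above (values in percent).
for env, p, r in [('Normal light', 96.83, 99.75),
                  ('Weak light', 96.32, 98.74),
                  ('Intense light', 96.15, 98.46)]:
    print(f'{env}: F1 = {f1_score(p, r) / 100:.2f}')
# Normal light: F1 = 0.98, Weak light: F1 = 0.98, Intense light: F1 = 0.97
```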
Algorithm | Inference Time/s | Running Memory/GB |
---|---|---|
Fusion (ours) | 0.94 | 1.26 |
Decision-level fusion | 0.90 | 0.99 |