Fully convolutional neural networks for LIDAR–camera fusion for pedestrian detection in autonomous vehicle

Published in Multimedia Tools and Applications

Abstract

Pedestrian detection is an integral part of a wide range of vision-based technologies, from object recognition and surveillance-camera monitoring to, more recently, autonomous vehicles. Driven by the rapid development of Convolutional Neural Networks (CNNs) for object identification, pedestrian detection has reached a high level of performance in dataset training and evaluation environments for autonomous vehicles. To achieve object identification and pedestrian detection, a sensor-fusion mechanism, Fully Convolutional Neural Networks for LIDAR–camera fusion, is proposed; it combines LIDAR data with multiple camera images to provide an optimal solution for pedestrian detection. The system model includes a dedicated image-fusion algorithm for pedestrian detection, and an architecture and framework are designed for the LIDAR–camera fusion network. In addition, a complete algorithm for pedestrian detection and identification is proposed that precisely locates pedestrians at ranges of 10 to 30 m. Finally, the proposed model's performance is evaluated on multiple parameters such as precision, sensitivity, and accuracy, and is shown to be effective in comparison with existing approaches.
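A common preprocessing step in LIDAR–camera fusion of the kind described above is projecting the 3-D LIDAR point cloud into the camera's image plane so that range data and pixels can be combined. The following is a minimal illustrative sketch using a standard pinhole camera model; the intrinsic matrix `K`, extrinsics `R`, `t`, and the sample points are hypothetical values for demonstration, not the paper's calibration.

```python
import numpy as np

def project_lidar_to_image(points_xyz: np.ndarray, K: np.ndarray,
                           R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project N x 3 LIDAR points into pixel coordinates via a pinhole model."""
    # Transform points from the LIDAR frame into the camera frame.
    cam = R @ points_xyz.T + t.reshape(3, 1)          # shape (3, N)
    cam = cam[:, cam[2] > 0]                          # keep points in front of the camera
    # Apply the intrinsic matrix and normalise by depth.
    uvw = K @ cam                                     # homogeneous pixel coords
    return (uvw[:2] / uvw[2]).T                       # shape (M, 2), pixel (u, v)

# Hypothetical calibration: focal length 500 px, principal point (320, 240),
# LIDAR and camera frames assumed aligned (identity rotation, zero translation).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
points = np.array([[0.0, 0.0, 2.0],    # directly ahead -> projects to image centre
                   [1.0, 0.0, 2.0]])   # 1 m to the side at 2 m depth
uv = project_lidar_to_image(points, K, np.eye(3), np.zeros(3))
print(uv)  # [[320. 240.] [570. 240.]]
```

Once points carry pixel coordinates, each projected LIDAR return can be associated with the image features at that pixel, which is the basis on which a fusion network can combine depth and appearance cues.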


Data availability

Available on request.

Code availability

Not applicable.


Funding

The authors did not receive any funding.

Author information


Contributions

Alfred Daniel J, Chandru Vignesh C, and Bala Anand Muthu were responsible for designing the framework, analyzing the performance, validating the results, and writing the article. Sivaparthipan CB and Carlos Enrique Montenegro Marin were responsible for collecting the information required for the framework, providing the software, critically reviewing the manuscript, and administering the process.

Corresponding author

Correspondence to C Chandru Vignesh.

Ethics declarations

Ethics approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Not applicable.

Conflict of interest

The authors have no conflicts of interest to declare.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Alfred Daniel, J., Chandru Vignesh, C., Muthu, B.A. et al. Fully convolutional neural networks for LIDAR–camera fusion for pedestrian detection in autonomous vehicle. Multimed Tools Appl 82, 25107–25130 (2023). https://doi.org/10.1007/s11042-023-14417-x

