Unsupervised Human Detection with an Embedded Vision System on a Fully Autonomous UAV for Search and Rescue Operations
Figures

- Figure 1. Image captured by the rescue unmanned aerial vehicle (UAV): (left) divided into an equally sized S × S grid of cells; (top and bottom) predicting B bounding boxes, confidence scores, and C conditional class probabilities; (right) leading to the final encoded detections.
- Figure 2. Convolutional neural network (CNN) model’s architecture.
- Figure 3. Neural network’s training performance curve.
- Figure 4. Metrics results for different input resolutions.
- Figure 5. Humans in open waters (left column) before and (right column) after the detection procedure.
- Figure 6. (a) The fully autonomous rescue UAV and (b) the embedded graphics processing unit (GPU) platform, an Nvidia Jetson TX1.
- Figure 7. Autonomous rescue system’s operation flowchart.
- Figure 8. Rescue UAV’s camera field of view (FoV): (a) the proportion between sides N and M and (b) the speeds t_x, t_y on the x and y axes.
- Figure 9. The life-ring’s horizontal displacement related to the UAV’s release altitude under several wind speeds.
Abstract
1. Introduction
2. Related Research
3. Deep Learning
4. Human Detection
In the network’s detection encoding:
- S × S: the dimensions of the grid into which the input image is divided
- B: the number of bounding boxes predicted per grid cell, and
- C: the number of the network’s labeled classes (a decoding sketch is given below)
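To make the encoding concrete, the following minimal sketch decodes a YOLO-style output tensor of shape S × S × (B·5 + C) into scored detections. It is illustrative only: the function name, default grid parameters, and confidence threshold are assumptions, not the exact post-processing running on the UAV.

```python
import numpy as np

def decode_predictions(output, S=7, B=2, C=1, conf_thresh=0.25):
    """Decode a YOLO-style S x S x (B*5 + C) tensor into detections.

    Each grid cell predicts B boxes as (x, y, w, h, confidence) plus C
    conditional class probabilities shared by all boxes in that cell.
    """
    detections = []
    for row in range(S):
        for col in range(S):
            cell = output[row, col]
            class_probs = cell[B * 5:]                    # C values
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                # Class-specific score = P(class | object) * P(object) * IoU
                scores = class_probs * conf
                cls = int(np.argmax(scores))
                if scores[cls] >= conf_thresh:
                    # (x, y) are offsets within the cell; convert to
                    # image-relative box-centre coordinates
                    cx, cy = (col + x) / S, (row + y) / S
                    detections.append((cx, cy, w, h, float(scores[cls]), cls))
    return detections

# Example with a random tensor for a single-class ("human") detector
boxes = decode_predictions(np.random.rand(7, 7, 2 * 5 + 1))
```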
The network is trained by minimizing a composite loss with three components:
- Classification Loss: the error in the predicted class probabilities
- Localization Loss: the error between the predicted bounding box and the ground truth
- Confidence Loss: the error in the objectness score, i.e., whether an object appears in the box (the three terms are combined in the sketch below)
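The three terms are typically combined as a weighted sum of squared errors. The sketch below is a simplified, single-image version of such a loss; the weighting factors lambda_coord = 5 and lambda_noobj = 0.5 follow the original YOLO formulation, and the array layout is an assumption for illustration.

```python
import numpy as np

def yolo_loss(pred, truth, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Simplified single-image YOLO-style sum-squared-error loss.

    pred, truth: (n, 5 + C) arrays of (x, y, w, h, conf, class probs...)
    for the n responsible predictors; obj_mask: (n,) array, 1 where a
    ground-truth object falls in the cell, 0 otherwise.
    """
    # Localization loss: centre error plus square-root width/height error,
    # counted only for predictors responsible for an object
    loc = np.sum(obj_mask[:, None] * ((pred[:, 0:2] - truth[:, 0:2]) ** 2
          + (np.sqrt(np.abs(pred[:, 2:4])) - np.sqrt(truth[:, 2:4])) ** 2))
    # Confidence loss: objectness error, down-weighted where no object appears
    conf_err = (pred[:, 4] - truth[:, 4]) ** 2
    conf = np.sum(obj_mask * conf_err + lambda_noobj * (1.0 - obj_mask) * conf_err)
    # Classification loss: class-probability error in object cells only
    cls = np.sum(obj_mask[:, None] * (pred[:, 5:] - truth[:, 5:]) ** 2)
    return lambda_coord * loc + conf + cls
```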
5. Implementation on the Autonomous UAV
In the life-ring drop model, the symbols are defined as follows:
- m is the life-ring’s mass (kg),
- A is the life-ring’s horizontal cross-sectional area (m²),
- ρ is the air density (kg/m³),
- C_D is the drag coefficient,
- F_L is the horizontal force (N),
- C_L is the lifting coefficient,
- F_w is the wind force (N), and
- A_1 is the life-ring’s vertical cross-sectional area (m²); a numerical sketch using these quantities follows this list
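A minimal Euler-integration sketch of this drop model is given below, estimating the life-ring’s horizontal displacement versus release altitude (cf. Figure 9). All numeric parameter values are illustrative assumptions, not the ones measured for the actual life-ring.

```python
G, RHO = 9.81, 1.225            # gravity (m/s^2), air density (kg/m^3)
M, C_D, C_L = 2.5, 0.5, 1.0     # mass (kg), drag and lifting coefficients (assumed)
A, A1 = 0.20, 0.05              # horizontal / vertical cross-sections (m^2, assumed)
V_WIND = 8.0                    # horizontal wind speed (m/s)
DT = 0.001                      # integration step (s)

def horizontal_displacement(release_altitude):
    """Euler-integrate the fall until splashdown; return horizontal drift (m)."""
    z, x, vz, vx = release_altitude, 0.0, 0.0, 0.0
    while z > 0.0:
        # Vertical: gravity opposed by quadratic air drag on area A
        az = -G + 0.5 * RHO * C_D * A * vz ** 2 / M
        # Horizontal: wind force on area A1, driven by the relative wind speed
        rel = V_WIND - vx
        ax = 0.5 * RHO * C_L * A1 * rel * abs(rel) / M
        vz += az * DT
        vx += ax * DT
        z += vz * DT
        x += vx * DT
    return x

for h in (10, 20, 30):
    print(f"release at {h:2d} m -> drift {horizontal_displacement(h):.2f} m")
```

As the release altitude grows, the ring spends longer in the wind field, so the drift grows faster than linearly, which is the qualitative trend Figure 9 reports.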
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
Abbreviation | Meaning |
---|---|
UAVs | Unmanned Aerial Vehicles |
SAR | Search and Rescue |
GNSS | Global Navigation Satellite System |
ROLFER | Robotic Lifeguard For Emergency Rescue |
PPP | Precise Point Positioning |
CNNs | Convolutional Neural Networks |
T-CNNs | Temporal Convolutional Neural Networks |
mAP | mean Average Precision |
UAS | Unmanned Aerial System |
SVM | Support Vector Machine |
FCNN | Fully Convolutional Neural Network |
IoU | Intersection Over Union |
CPU | Central Processing Unit |
GPU | Graphics Processing Unit |
FPS | Frames Per Second |
SP | Supervised Person |
GS | Ground Station |
ASW | Android Smart Watch |
FoV | Field of View |
Specification | Value |
---|---|
Rotors | 6 |
Weight | 2700 g (excluding battery) |
Brushless Motors | 5008, KV 400 (RPM/V) |
Motors Max. Current | 30 A |
Payload Capability | 5 kg |
Frame Type | Folding umbrella |
Frame Material | Lightweight, portable carbon fiber |
Landing Gear | Electric folding |
Wheelbase | 96 cm |
Battery Type | LiPo, 22.2 V, 16,000 mAh, 30 C |
Rotor Size | 1855, high-end carbon fiber |
Brushless ESC | 40 A, 6S |
Autopilot | PX4 (168 MHz Cortex-M4F CPU, 256 KB RAM, 2 MB flash) |
Camera | GoPro, 1920 × 1080 HD |
Hover Power | 1800 W |
Hover Time (max) | 25 min |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lygouras, E.; Santavas, N.; Taitzoglou, A.; Tarchanidis, K.; Mitropoulos, A.; Gasteratos, A. Unsupervised Human Detection with an Embedded Vision System on a Fully Autonomous UAV for Search and Rescue Operations. Sensors 2019, 19, 3542. https://doi.org/10.3390/s19163542