Deep Learning Based Protective Equipment Detection on Offshore Drilling Platform
Figure 1. The overall pipeline of the personal protective equipment detection method on an offshore drilling platform.
Figure 2. Feature fusion mechanism of YOLOv3. C3, C4 and C5 are the three feature maps of different scales output by the YOLOv3 backbone network. P3, P4 and P5 are the feature maps used for detection after feature fusion.
Figure 3. The diagram of the proposed improved YOLOv3 model. C3, C4 and C5 are the three feature maps of different scales output by the backbone network. ASFF-1, ASFF-2 and ASFF-3 are new feature maps obtained by adaptive spatial feature fusion. P3, P4 and P5 are the feature maps used for detection.
Figure 4. The pipeline of regional multi-person pose estimation.
Figure 5. The human body model with 17 key points.
Figure 6. Locating the areas to be detected (head and workwear uniform): (a) the key points of the human body; (b) locating the head area; (c) locating the workwear uniform area.
Figure 7. Identification of protective equipment based on the improved ResNet50: (a) training the workwear uniform recognition model; (b) training the helmet recognition model.
Figure 8. Exemplar detection results for YOLOv3 and the improved YOLOv3 in the case of small objects. The upper left corner of subgraphs (a,b) shows the time: 8:23:13 a.m., Wednesday, 13 September 2017; the lower right corner shows the location: CB6C-wellhead.
Figure 9. Exemplar detection results for YOLOv3 and the improved YOLOv3 in the case of occlusions. The upper left corner of subgraphs (a,b) shows the time: 3:10:51 p.m., Saturday, 23 September 2017; the lower right corner shows the location: CB-01-wellhead.
Figure 10. Comparative results of IoU in helmet region localization.
Figure 11. Exemplar results of safety helmet detection. Subgraphs (a–c): time (upper right) 11:59:19 a.m., Friday, 18 January 2019; location (lower left) CB30B low-voltage distribution room. Subgraphs (d–f): picture source (lower right) Qilu photo group. Subgraphs (g–i): time (upper right) 2:50:06 p.m., Tuesday, 5 March 2019; location (lower left) CB30B low-voltage distribution room.
Figure 12. Results of IoU in the work clothes region localization experiment based on our method.
Figure 13. Loss function visualizations on the training set: (a) the loss curve for training the helmet detection model; (b) the loss curve for training the workwear uniform detection model.
Figure 14. Exemplar detections of protective equipment on the offshore drilling platform (conforming to the protective equipment wearing regulations). Subgraph (a): time (upper left) 8:53:57 a.m., Monday, 27 November 2017; location (lower right) the center three-phase separator. Subgraph (b): time (upper left) 7:23:20 a.m., Thursday, 14 September 2017; location (lower right) the center three-export pump area. Subgraph (c): time (upper right) 4:47:55 p.m., Monday, 7 January 2019; location (lower left) the CB30A 10 kV high-pressure chamber.
Figure 15. Exemplar detections of protective equipment on the offshore drilling platform (violating the protective equipment wearing regulations). Rightmost picture: time (upper right) 11:59:19 a.m., Friday, 18 January 2019; location (lower right) CB30B low-voltage distribution room.
Figure 16. Exemplar detections of protective equipment with complex human postures. Subgraph (a): time (upper left) 3:19:41 p.m., Friday, 22 September 2017; location (lower right) CB1G-technological process. Subgraph (b): time (upper left) 4:48:30 p.m., Friday, 11 August 2017; location (lower right) CB1C-wellhead. Subgraph (c): time (upper right) 4:40:35 p.m., Friday, 11 August 2017; location (lower left) CB1C-wellhead.
Figure 17. Detection results of protective equipment for small-scale workers. Subgraph (a): time (upper left) 10:16:36 a.m., Wednesday, 28 June 2017; location (lower right) CB22D technology. Subgraph (b): time (upper right) 7:55:23 a.m., 7 March 2019; location (lower right) CB4E-side A of living roof. Subgraph (c): time (upper left) 8:23:14 a.m., Wednesday, 13 September 2017; location (lower left) CB6C-wellhead.
Figure 18. Exemplar detections of protective equipment with occlusions. Subgraph (a): time (upper left) 4:13:22 p.m., Friday, 28 April 2017; location (lower right) CB11EH technology. Subgraph (b): time (upper right) 3:10:52 p.m., 23 September 2017; location (lower right) CB-01-wellhead. Subgraph (c): time (upper left) 3:33:08 p.m., Saturday, 23 September 2017; location (lower left) CB253-wellhead.
Abstract
1. Introduction
- To cope with the complex background of offshore drilling platforms, we modify the YOLOv3 algorithm and use random erasing [22] for data augmentation to mitigate the scarcity of occluded-worker samples. This improves recognition accuracy for small-scale and occluded personnel.
- We use a pose estimation algorithm to obtain the key points of the human body and locate the area of interest (head area and workwear uniform area) based on the spatial relations among the key points.
- A deep transfer learning method based on a modified ResNet50 is introduced to train the protective equipment recognition model, which effectively avoids the impact on network training of an insufficient number of protective equipment image samples.
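The random erasing augmentation cited in the first contribution can be sketched as follows. This is a minimal NumPy version for illustration; the area and aspect-ratio ranges below follow the spirit of Zhong et al. [22] but are not the authors' settings:

```python
import numpy as np

def random_erase(image, p=0.5, area_range=(0.02, 0.2),
                 aspect_range=(0.3, 3.3), rng=None):
    """Randomly overwrite a rectangle of the image with noise,
    simulating an occluded worker. image: H x W x C uint8 array.
    Returns a (possibly) modified copy."""
    rng = rng if rng is not None else np.random.default_rng()
    img = image.copy()
    if rng.random() > p:
        return img
    h, w = img.shape[:2]
    for _ in range(50):  # retry until a rectangle fits inside the image
        area = rng.uniform(*area_range) * h * w
        aspect = rng.uniform(*aspect_range)
        eh = int(round(np.sqrt(area * aspect)))
        ew = int(round(np.sqrt(area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            top = rng.integers(0, h - eh)
            left = rng.integers(0, w - ew)
            img[top:top + eh, left:left + ew] = rng.integers(
                0, 256, size=(eh, ew, img.shape[2]), dtype=np.uint8)
            return img
    return img
```

In practice, `torchvision.transforms.RandomErasing` provides an equivalent transform inside a PyTorch training pipeline.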
2. Related Works
3. The Proposed PPED Method
Algorithm 1 Detecting the personal protective equipment of an offshore drilling platform.
Input: M — an image containing workers on the offshore drilling platform.
1. Use the improved YOLOv3 to obtain the workers' bounding boxes: P.
2. Take P as the input of RMPE to obtain the coordinates of the 17 key points: K.
3. Take K as input and use the seven-point positioning method to locate the head area: H.
4. Take K as input and use the four-point positioning method to locate the workwear uniform area: W.
5. Take H as input to the helmet recognition model to get the result: helmet or no helmet.
6. Take W as input to the workwear recognition model to get the result: safety clothing or no safety clothing.
7. if helmet and safety clothing then the worker wears a helmet and safety clothing;
   else if helmet and no safety clothing then the worker wears a helmet but no safety clothing;
   else if no helmet and safety clothing then the worker wears no helmet but safety clothing;
   else the worker wears neither a helmet nor safety clothing.
Output: the worker's protective equipment wearing status.
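The final step of Algorithm 1 reduces to combining two binary classifier outputs into one of four wearing-status cases. A minimal sketch (the function name and message strings are illustrative):

```python
def ppe_status(has_helmet: bool, has_safety_clothing: bool) -> str:
    """Combine the helmet and workwear recognition results into a
    wearing-status message, mirroring the four cases of Algorithm 1."""
    if has_helmet and has_safety_clothing:
        return "wears helmet and safety clothing"
    if has_helmet:
        return "wears helmet but no safety clothing"
    if has_safety_clothing:
        return "wears no helmet but safety clothing"
    return "wears no helmet and no safety clothing"
```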
3.1. Improving YOLOv3 for Candidate Detection
3.1.1. Fusion Factors Calculation
3.1.2. Feature Reshaping
3.1.3. Adaptive Fusion
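Once the three levels have been reshaped to a common resolution, adaptive spatial feature fusion combines them with per-location weights that sum to one (via a softmax). A framework-free sketch of this fusion step, assuming the reshaping is already done; in the real model the fusion logits are predicted by learned 1x1 convolutions, whereas here they are passed in directly:

```python
import numpy as np

def softmax(logits, axis=0):
    """Numerically stable softmax along the given axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_fusion(feats, weight_logits):
    """Adaptively fuse feature maps reshaped to a common (C, H, W) size.

    feats: list of three (C, H, W) arrays (the reshaped C3/C4/C5 levels).
    weight_logits: (3, H, W) per-location fusion logits.
    Returns the fused (C, H, W) map; the softmax guarantees the three
    spatial weights sum to 1 at every location.
    """
    w = softmax(weight_logits, axis=0)                 # (3, H, W)
    return sum(w[i][None] * feats[i] for i in range(3))
```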
3.2. Areas of Interest Detection
3.2.1. Human Body Key Points Extraction
3.2.2. Head Area Detection
3.2.3. Workwear Uniform Area Detection
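The idea behind Sections 3.2.2 and 3.2.3 — deriving the head and workwear regions from subsets of the 17 key points — can be sketched as follows. This assumes COCO-style key-point names and uses simple bounding boxes with an illustrative padding; the paper's exact seven-point and four-point positioning rules are not reproduced here:

```python
def bbox(points, pad=0.0):
    """Axis-aligned box (x1, y1, x2, y2) around (x, y) points, padded."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)

def head_area(kpts, pad=10):
    """Box around nose, eyes, ears and shoulders (7 key points), in the
    spirit of the seven-point head localization. kpts: name -> (x, y)."""
    names = ["nose", "left_eye", "right_eye", "left_ear", "right_ear",
             "left_shoulder", "right_shoulder"]
    return bbox([kpts[n] for n in names], pad)

def workwear_area(kpts, pad=10):
    """Box around shoulders and hips (4 key points), in the spirit of
    the four-point workwear localization."""
    names = ["left_shoulder", "right_shoulder", "left_hip", "right_hip"]
    return bbox([kpts[n] for n in names], pad)
```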
3.3. Personal Protective Equipment Recognition
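The transfer-learning idea of this section — keep a pretrained backbone frozen and retrain only the classification head on a small protective-equipment dataset — can be illustrated framework-free. Here the frozen backbone is assumed to have already produced feature vectors, and the head is a binary logistic regression trained by gradient descent; the actual model fine-tunes a modified ResNet50 in PyTorch:

```python
import numpy as np

def train_head(features, labels, lr=0.5, epochs=300, seed=0):
    """Train a binary logistic-regression head on frozen features.

    features: (N, D) array output by the (frozen) backbone.
    labels:   (N,) array of 0/1 (e.g. no-helmet / helmet).
    Only these head parameters are updated, mimicking transfer
    learning with a frozen feature extractor. Returns (weights, bias).
    """
    rng = np.random.default_rng(seed)
    n, d = features.shape
    w = rng.normal(scale=0.01, size=d)
    b = 0.0
    for _ in range(epochs):
        logits = features @ w + b
        probs = 1.0 / (1.0 + np.exp(-logits))
        grad = probs - labels              # dL/dlogits for BCE loss
        w -= lr * features.T @ grad / n
        b -= lr * grad.mean()
    return w, b

def predict(features, w, b):
    """Classify feature vectors with the trained head."""
    return (features @ w + b > 0).astype(int)
```

With PyTorch this corresponds to setting `requires_grad = False` on the backbone parameters and replacing the final fully connected layer of ResNet50 with a new one sized for the PPE classes.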
4. Experimental Results
- CPU: Intel E5-2609 v2, 8 processors;
- Clock frequency: 2.5 GHz;
- Memory: 32 GB;
- Graphics card: Nvidia GTX 1080 Ti, 2 cards.
- Windows 10 operating system;
- PyCharm software development platform;
- PyTorch deep-learning framework.
4.1. Candidate Person Detection
4.2. Area of Interest Detection
4.3. Protective Equipment Detection
4.4. Comparison with Related Methods
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Rubaiyat, A.H.M.; Toma, T.T.; Kalantari-Khandani, M.; Rahman, S.A.; Chen, L.; Ye, Y.; Pan, C.S. Automatic detection of helmet uses for construction safety. In Proceedings of the 2016 IEEE/WIC/ACM International Conference on Web Intelligence Workshops (WIW), Omaha, NE, USA, 13–16 October 2016; pp. 135–142. [Google Scholar]
- Luo, X.; Li, H.; Yang, X.; Yu, Y.; Cao, D. Capturing and understanding workers’ activities in far-field surveillance videos with deep action recognition and bayesian nonparametric learning. Comput. Aided Civ. Infrastruct. Eng. 2019, 34, 333–351. [Google Scholar] [CrossRef]
- Shen, J.; Xiong, X.; Li, Y.; He, W.; Li, P.; Zheng, X. Detecting safety helmet wearing on construction sites with bounding-box regression and deep transfer learning. Comput. Aided Civ. Infrastruct. Eng. 2021, 36, 180–196. [Google Scholar] [CrossRef]
- Wang, H.; Hu, Z.; Guo, Y.; Yang, Z.; Zhou, F.; Xu, P. A real-time safety helmet wearing detection approach based on csyolov3. Appl. Sci. 2020, 10, 6732. [Google Scholar] [CrossRef]
- Long, X.; Cui, W.; Zheng, Z. Safety helmet wearing detection based on deep learning. In Proceedings of the 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chengdu, China, 15–17 March 2019; pp. 2495–2499. [Google Scholar]
- Wu, H.; Zhao, J. An intelligent vision-based approach for helmet identification for work safety. Comput. Ind. 2018, 100, 267–277. [Google Scholar] [CrossRef]
- Park, M.-W.; Brilakis, I. Construction worker detection in video frames for initializing vision trackers. Autom. Constr. 2012, 28, 15–25. [Google Scholar] [CrossRef]
- Siebert, F.W.; Lin, H. Detecting motorcycle helmet use with deep learning. Accid. Anal. Prev. 2020, 134, 105319. [Google Scholar] [CrossRef] [PubMed]
- Vishnu, C.; Singh, D.; Mohan, C.K.; Babu, S. Detection of motorcyclists without helmet in videos using convolutional neural network. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 3036–3041. [Google Scholar]
- Doungmala, P.; Klubsuwan, K. Helmet wearing detection in thailand using haar like feature and circle hough transform on image processing. In Proceedings of the 2016 IEEE International Conference on Computer and Information Technology (CIT), Nadi, Fiji, 8–10 December 2016; pp. 611–614. [Google Scholar]
- Li, J.; Liu, H.; Wang, T.; Jiang, M.; Wang, S.; Li, K.; Zhao, X. Safety helmet wearing detection based on image processing and machine learning. In Proceedings of the 2017 Ninth International Conference on Advanced Computational Intelligence (ICACI), Doha, Qatar, 4–6 February 2017; pp. 201–205. [Google Scholar]
- Wen, C.-Y. The safety helmet detection technology and its application to the surveillance system. J. Forensic Sci. 2004, 49, 1–11. [Google Scholar] [CrossRef]
- Chiverton, J. Helmet presence classification with motorcycle detection and tracking. IET Intell. Transp. Syst. 2012, 6, 259–269. [Google Scholar] [CrossRef]
- Waranusast, R.; Bundon, N.; Timtong, V.; Tangnoi, C.; Pattanathaburt, P. Machine vision techniques for motorcycle safety helmet detection. In Proceedings of the 2013 28th International Conference on Image and Vision Computing New Zealand (IVCNZ 2013), Wellington, New Zealand, 27–29 November 2013; pp. 35–40. [Google Scholar]
- Dahiya, K.; Singh, D.; Mohan, C.K. Automatic detection of bike-riders without helmet using surveillance videos in real-time. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 3046–3051. [Google Scholar]
- Bo, Y.; Huan, Q.; Huan, X.; Rong, Z.; Hongbin, L.; Kebin, M.; Weizhong, Z.; Lei, Z. Helmet detection under the power construction scene based on image analysis. In Proceedings of the 2019 IEEE 7th International Conference on Computer Science and Network Technology (ICCSNT), Dalian, China, 19–20 October 2019; pp. 67–71. [Google Scholar]
- Fang, Q.; Li, H.; Luo, X.; Ding, L.; Luo, H.; Rose, T.M.; An, W. Detecting non-hardhat-use by a deep learning method from far-field surveillance videos. Autom. Constr. 2018, 85, 1–9. [Google Scholar] [CrossRef]
- Nath, N.D.; Behzadan, A.H.; Paal, S.G. Deep learning for site safety: Real-time detection of personal protective equipment. Autom. Constr. 2020, 112, 103085. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Fang, H.-S.; Xie, S.; Tai, Y.-W.; Lu, C. Rmpe: Regional multi-person pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2334–2343. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; Yang, Y. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13001–13008. [Google Scholar]
- Hu, Z.; Yan, H.; Lin, X. Clothing segmentation using foreground and background estimation based on the constrained delaunay triangulation. Pattern Recognit. 2008, 41, 1581–1592. [Google Scholar] [CrossRef]
- Gallagher, A.C.; Chen, T. Clothing cosegmentation for recognizing people. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
- Stricker, M.; Dimai, A. Spectral covariance and fuzzy regions for image indexing. Mach. Vis. Appl. 1997, 10, 66–73. [Google Scholar] [CrossRef]
- Li, X. Image retrieval based on perceptive weighted color blocks. Pattern Recognit. Lett. 2003, 24, 1935–1941. [Google Scholar] [CrossRef]
- Yang, W.; Luo, P.; Lin, L. Clothing co-parsing by joint image segmentation and labeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 3182–3189. [Google Scholar]
- Yamaguchi, K.; Kiapour, M.H.; Ortiz, L.E.; Berg, T.L. Parsing clothing in fashion photographs. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3570–3577. [Google Scholar]
- Bo, Y.; Fowlkes, C.C. Shape-based pedestrian parsing. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2265–2272. [Google Scholar]
- Jin, L.; Liu, G. An approach on image processing of deep learning based on improved ssd. Symmetry 2021, 13, 495. [Google Scholar] [CrossRef]
- Gong, Y.; Yu, X.; Ding, Y.; Peng, X.; Zhao, J.; Han, Z. Effective fusion factor in fpn for tiny object detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual Conference, 5–9 January 2021; pp. 1160–1168. [Google Scholar]
- Liu, S.; Huang, D.; Wang, Y. Learning spatial fusion for single-shot object detection. arXiv 2019, arXiv:1911.09516. [Google Scholar]
- Wang, G.; Wang, K.; Lin, L. Adaptively connected neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1781–1790. [Google Scholar]
- Qiang, Z.; Yao, Y.; Zhou, D.; Liu, R. Motion key-frame extraction by using optimized t-stochastic neighbor embedding. Symmetry 2015, 7, 395–411. [Google Scholar]
- He, Y.; Zhu, C.; Wang, J.; Savvides, M.; Zhang, X. Bounding box regression with uncertainty for accurate object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2888–2897. [Google Scholar]
- Zou, Z.; Shi, Z.; Guo, Y.; Ye, J. Object detection in 20 years: A survey. arXiv 2019, arXiv:1905.05055. [Google Scholar]
Number of Samples (Images) | YOLOv3 | Our Method |
---|---|---|
5000 | | |
10,000 | | |
15,000 | | |
20,000 | | |
22,000 | | |
Method | Accuracy (%) |
---|---|
Jie, L. et al. | |
Shen et al. | |
Ours |
Method | Accuracy (%) |
---|---|
Park et al. | |
Ours |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Gong, F.; Ji, X.; Gong, W.; Yuan, X.; Gong, C. Deep Learning Based Protective Equipment Detection on Offshore Drilling Platform. Symmetry 2021, 13, 954. https://doi.org/10.3390/sym13060954