ACD-Net: An Abnormal Crew Detection Network for Complex Ship Scenarios
Figure 1. ACD-Net: abnormal crew detection network.
Figure 2. Four types of images with distinct features: (a) image with uneven lighting and significant brightness variations; (b) image with local overexposure, underexposure, or blurring; (c) image with a cluttered background and a small proportion of crew in the frame; (d) image with severe occlusions and overlaps between crew and equipment.
Figures 3–6. YOLOv5s feature visualization and recognition effect diagrams: (a) original image; (b) the C3 module before SPPF; (c) SPPF; (d) input 1 of the neck (PAN); (e) input 2 of the neck (PAN); (f) input 3 of the neck (PAN); (g) YOLOv5s detection result.
Figure 7. The structures of the C3 module, the TransformerBlock, and the C3-TransformerBlock module.
Figure 8. Schematic of the CBAM added to the feature fusion network.
Figure 9. Comparison of loss function effects: (a) original image; (b) IoU; (c) CIoU.
Figure 10. The architecture of YOLO-TRCA.
Figure 11. CFA: crew identity recognition process.
Figure 12. Facial coordinate diagram.
Figure 13. Yaw rotation.
Figure 14. Diagram of the crew identity recognition process.
Figure 15. Partial images of the dataset: (a) not wearing a life jacket (nolifevast); (b) smoking (smoke); (c) not wearing work clothes (notrainlifevast); (d) not wearing a shirt (nocoat); (e) normal (lifevast).
Figure 16. Comparison of detection results: (a–e) original images; (f–j) YOLOv5s; (k–o) proposed method.
Figure 17. Feature visualization of the network: (a,f,k,p) original images; (b,l) C3; (g,q) C3-TransformerBlock; (c–e,m–o) before adding CBAM1–CBAM3; (h–j,r–t) after adding CBAM1–CBAM3.
Figure 18. Marine equipment layout diagram: (a) overall picture; (b) partial view.
Figure 19. Algorithm effect and software design: (a,b) abnormal behavior detection of crew members in monitoring; (c) capturing and identifying abnormal crew; (d) abnormal crew identification record; (e,f) abnormal crew identity recognition results.
Abstract
1. Introduction
2. Related Work
3. Methodology
3.1. Crew Anomaly Behavior Detection
3.1.1. Comparison of Related Detection Models
3.1.2. Analysis of Crew Abnormal Behaviors Detection Issues Based on YOLOv5
3.1.3. Feature Extraction Network Based on C3-TransformerBlock
3.1.4. Feature Fusion Network Based on a CBAM Attention Mechanism
3.1.5. Loss Function
3.1.6. Improved Overall Network Structure
3.2. Video Sequence-Based Crew Identity Recognition
3.2.1. Filter: Fast Face Quality Assessment Algorithm
1. Fast Face Pose Estimation
2. Image Blur and Contrast Calculation
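The blur and contrast scores used by the fast quality filter can be computed cheaply per face crop. The paper's exact formulas are not reproduced here; the sketch below uses two common choices as stand-ins: variance of the 3×3 Laplacian response as a sharpness score (low variance suggests blur) and RMS contrast.

```python
def laplacian_variance(gray):
    """Blur metric: variance of the 3x3 Laplacian response (higher = sharper).
    `gray` is a 2D list of intensities in [0, 255]."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbor Laplacian at the interior pixel (y, x)
            lap = (gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1]
                   + gray[y][x + 1] - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)


def rms_contrast(gray):
    """RMS contrast: standard deviation of intensities normalized to [0, 1]."""
    flat = [p / 255.0 for row in gray for p in row]
    mean = sum(flat) / len(flat)
    return (sum((p - mean) ** 2 for p in flat) / len(flat)) ** 0.5
```

A face crop would be accepted only when both scores exceed thresholds tuned on the target camera; a uniform (defocused) patch yields a Laplacian variance near zero, while a sharp high-frequency patch scores high on both metrics.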
3.2.2. Crew Identity Recognition Method
Algorithm 1. Crew Identity Recognition Algorithm for Video Sequences
Input: video sequence
Output: abnormal behavior category of crew members, crew number, time, image, name
return None
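The interface above can be sketched as a loop over the video sequence. In this sketch the callables `detect_abnormal`, `extract_face`, `face_quality_ok`, and `identify` are hypothetical stand-ins for the pipeline's stages (behavior detector, face detector, fast quality filter, and face recognizer); their names and signatures are assumptions, not the paper's API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AbnormalRecord:
    behavior: str      # abnormal behavior category
    crew_number: str
    time: str
    image: object      # captured face crop
    name: str


def recognize_abnormal_crew(frames, detect_abnormal, extract_face,
                            face_quality_ok, identify) -> Optional[AbnormalRecord]:
    """Scan a video sequence; on an abnormal-behavior detection, keep only
    faces that pass the quality filter, then run identity recognition.
    Returns an AbnormalRecord, or None if no identity is established."""
    for t, frame in enumerate(frames):
        for behavior, person_box in detect_abnormal(frame):
            face = extract_face(frame, person_box)
            if face is None or not face_quality_ok(face):
                continue  # skip low-quality faces (pose/blur/contrast filter)
            identity = identify(face)
            if identity is not None:
                name, crew_number = identity
                return AbnormalRecord(behavior, crew_number, str(t), face, name)
    return None
```

Deferring recognition until a frame passes the quality filter is what lets the video-sequence method tolerate individual blurred or profile-view frames.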
4. Experimental Results and Analysis
4.1. Crew Abnormal Behavior Detection Experiment
4.1.1. Experimental Dataset
4.1.2. Comparative Experiments
4.1.3. Ablation Experiment
4.2. Comparative Experiment on Crew Identity Recognition
4.3. Actual Testing on Board
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
| (r, k) | P | R | mAP@0.5 | mAP@0.5:0.95 |
|---|---|---|---|---|
| (32, 7) | 94.6% | 92.0% | 92.5% | 76.6% |
| (16, 7) | 95.6% | 91.7% | 93.2% | 76.9% |
| (8, 7) | 94.2% | 91.5% | 92.2% | 76.7% |
| (32, 3) | 94.5% | 91.2% | 92.4% | 76.6% |
| (16, 3) | 95.3% | 91.5% | 92.6% | 76.6% |
| (8, 3) | 95.2% | 91.8% | 92.9% | 76.2% |
| Dataset | YOLOv5 | Faster R-CNN | SSD | RetinaNet | YOLOv4 |
|---|---|---|---|---|---|
| COCO | 56.8% | 57.5% | 51.5% | 54.2% | 43.5% |
| Pascal VOC | 82.1% | 80.9% | 77.6% | 80.5% | 85.1% |
| ImageNet | 74.2% | 73.9% | 71.8% | 72.5% | 69.2% |
| Model | YOLOv5 | Faster R-CNN | SSD | RetinaNet | YOLOv4 |
|---|---|---|---|---|---|
| Model input | 640 × 640 | 1000 × 600 | 300 × 300 | 800 × 800 | 608 × 608 |
| Speed (NVIDIA GeForce GTX 1080 Ti) | 60 FPS | 3 FPS | 25 FPS | 7 FPS | 43 FPS |
| Scenario | Brightness Variation Scenes | Blurred Scenes with Image Jitter | Occlusion and Overlap Scenes | Other Scenes |
|---|---|---|---|---|
| Number | 1011 | 209 | 1186 | 1081 |
| Proportion | 29% | 6% | 34% | 31% |
| Class | Nolifevast (abnormal) | Notrainlifevast (abnormal) | Smoke (abnormal) | Nocoat (abnormal) | Lifevast (normal) |
|---|---|---|---|---|---|
| Target number | 4200 | 1400 | 1000 | 600 | 1500 |
| Experimental Hardware Configuration | Experimental Software Configuration |
|---|---|
| CPU: Intel(R) Core(TM) i7-10750H @ 2.60 GHz; GPU: NVIDIA RTX 2060 6 GB (laptop); memory: 16 GB DDR4-2933; disk: 1 TB SSD | Python 3.6.12; CUDA 11.5; cuDNN 8.6.5; PyTorch 1.11.0; TensorFlow 1.13.0 |
| Model | mAP@0.5:0.95/% | mAP@0.5/% | Inference Time/ms |
|---|---|---|---|
| YOLO-TRCA | 76.9 | 93.2 | 16.90 |
| YOLOv5s | 72.7 | 91.8 | 16.20 |
| AIA | 74.4 | 92.8 | 129.03 |
| CenterNet | 71.2 | 92.5 | 42.00 |
| YOLOv4 | 70.3 | 88.7 | 26.31 |
| Class | Method | Figure 16a | Figure 16b | Figure 16c | Figure 16d | Figure 16e |
|---|---|---|---|---|---|---|
| nolifevast | YOLOv5s (error) | 4 (0) | 6 (0) | — | 2 (1) | — |
| nolifevast | YOLO-TRCA (error) | 5 (0) | 11 (0) | — | 2 (1) | — |
| nolifevast | Targets | 6 | 11 | — | 2 | — |
| notrainlifevast | YOLOv5s (error) | — | — | 1 (0) | — | — |
| notrainlifevast | YOLO-TRCA (error) | — | — | 2 (0) | — | — |
| notrainlifevast | Targets | — | — | 2 | — | — |
| smoke | YOLOv5s (error) | — | — | — | — | 0 (0) |
| smoke | YOLO-TRCA (error) | — | — | — | — | 2 (0) |
| smoke | Targets | — | — | — | — | 2 |
| nocoat | YOLOv5s (error) | — | — | — | — | 2 (0) |
| nocoat | YOLO-TRCA (error) | — | — | — | — | 2 (0) |
| nocoat | Targets | — | — | — | — | 2 |
| lifevast | YOLOv5s (error) | — | — | 2 (0) | 5 (1) | — |
| lifevast | YOLO-TRCA (error) | — | — | 3 (0) | 8 (1) | — |
| lifevast | Targets | — | — | 3 | 11 | — |
| Precision improvement | | 16.7% | 45.5% | 40.0% | 23.0% | 50.0% |
| C3-TransformerBlock | CBAM-1 | CBAM-2 | CBAM-3 | CIoU-NMS | mAP@0.5:0.95/% | Inference Time/ms |
|---|---|---|---|---|---|---|
| × | × | × | × | × | 72.7 | 16.2 |
| × | × | × | × | √ | 73.2 ↑ | 16.6 ↑ |
| √ | × | × | × | √ | 75.6 ↑ | 16.7 ↑ |
| √ | √ | √ | √ | √ | 76.9 ↑ | 16.9 ↑ |
| Method | TMR | FMR | FRR | Inference Time/ms |
|---|---|---|---|---|
| CenterFace + Filter + ArcFace | 0.68 | 0.24 | 0.08 | 16.35 |
| CenterFace + ArcFace | 0.40 | 0.56 | 0.04 | 15.82 |
| MTCNN + FaceRecognition | 0.19 | 0.08 | 0.73 | 432.18 |
| MTCNN + POSIT + FaceRecognition | 0.65 | 0.23 | 0.12 | 438.59 |
| TL-GAN + VGG-Face | 0.74 | 0.12 | 0.14 | 15,100 |
| Test Item | Test Result |
|---|---|
| Number of abnormal crew image captures | 483 |
| Number of abnormal crew identity recognitions | 336 |
| Average per-frame video processing time | 39.5 ms |
| Website response delay | 0.68–3.11 ms |
| Class | Nolifevast (abnormal) | Notrainlifevast (abnormal) | Smoke (abnormal) | Nocoat (abnormal) | Lifevast (normal) |
|---|---|---|---|---|---|
| Accuracy | 0.987 | 0.988 | 0.661 | 0.995 | 0.985 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, Z.; Zhang, H.; Gao, D.; Wu, Z.; Zhang, Z.; Du, L. ACD-Net: An Abnormal Crew Detection Network for Complex Ship Scenarios. Sensors 2024, 24, 7288. https://doi.org/10.3390/s24227288