Context-Aware DGCN-Based Ship Formation Recognition in Remote Sensing Images
<p><b>Figure 1.</b> Examples of ship datasets in remote-sensing images. (<b>a</b>) The remote-sensing images are large in scale and require cropping during training. (<b>b</b>) Ships occupy only a small portion of a remote-sensing image, and target features may be lost or obscured after multiple down-sampling operations. (<b>c</b>) The images have cluttered backgrounds (such as islands, port containers, dry docks, and other land targets), making it difficult to locate ships.</p>
<p><b>Figure 2.</b> The overall framework of the proposed method. It adopts a center-point-based detection network to detect ships and obtain the position information <math display="inline"><semantics> <mrow> <mo stretchy="false">(</mo> <msub> <mi>c</mi> <mi>x</mi> </msub> <mo>,</mo> <msub> <mi>c</mi> <mi>y</mi> </msub> <mo>,</mo> <mi>w</mi> <mo>,</mo> <mi>h</mi> <mo>,</mo> <mi>θ</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math>, while the context-aware DGCN is designed to recognize the ship formation.</p>
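The detector described above outputs one oriented box per ship, parameterized as center, size, and angle. As a minimal illustration of how such a parameterization maps to image coordinates (this is not the paper's code; the corner ordering and the radian angle convention are assumptions), the four corners can be recovered like this:

```python
import math

def obb_corners(cx, cy, w, h, theta):
    """Return the four corners of an oriented box (cx, cy, w, h, theta).

    theta is the rotation angle in radians; before rotation the corner
    order is top-left, top-right, bottom-right, bottom-left.
    """
    c, s = math.cos(theta), math.sin(theta)
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # Rotate each axis-aligned offset by theta, then translate to the center.
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in half]

# With theta = 0 the box is axis-aligned.
print(obb_corners(10.0, 20.0, 4.0, 2.0, 0.0))
# → [(8.0, 19.0), (12.0, 19.0), (12.0, 21.0), (8.0, 21.0)]
```

The ship center points used later for grouping and triangulation are simply the `(cx, cy)` components of these boxes.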
<p><b>Figure 3.</b> The center-point-based detection model consists of feature extraction and center-point detection modules. The output is the position coordinates and angle information.</p>
<p><b>Figure 4.</b> The context-aware dense graph convolution network for ship formation recognition. The feature similarity clustering method is used for ship grouping, the Delaunay triangulation serves as the graph structure of the ship formation, and the DGCN aggregates features and outputs the formation classification. Throughout the illustration, the graph nodes are treated as identical; the color changes carry no meaning.</p>
<p><b>Figure 5.</b> Angle diagram of ship similarity calculation.</p>
<p><b>Figure 6.</b> The graph structure of ship formation based on the Delaunay triangulation. The input is the position coordinates of the center points within the ship formation. The output is the graph structure representation of the ship formation for the downstream classification task.</p>
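The construction above turns a set of ship center points into a graph by taking Delaunay triangles and keeping their edges. A small sketch of that step, assuming SciPy is available (the paper's own implementation details are not specified here):

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(points):
    """Build the undirected edge set of the Delaunay graph over 2-D points."""
    tri = Delaunay(np.asarray(points, dtype=float))
    edges = set()
    for simplex in tri.simplices:          # each simplex is a triangle (i, j, k)
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = sorted((simplex[a], simplex[b]))
            edges.add((int(i), int(j)))    # store each edge once, (small, large)
    return sorted(edges)

# Four ship centers forming a convex quadrilateral → 2 triangles, 5 edges.
centers = [(0, 0), (2, 0), (1, 2), (3, 2)]
print(delaunay_edges(centers))
```

The resulting edge list is exactly the adjacency structure a downstream graph network consumes, e.g. as a binary adjacency matrix.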
<p><b>Figure 7.</b> The illustration of a three-layer Locality Preserving GCN.</p>
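A three-layer GCN propagates node features over the formation graph through repeated neighborhood aggregation. The sketch below uses plain symmetrically normalized GCN propagation; it is a generic stand-in, not the paper's exact Locality Preserving variant, and the layer widths are illustrative:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_forward(A, X, weights):
    """Stacked GCN layers: H_{l+1} = ReLU(Â H_l W_l); the last layer is linear."""
    A_norm = normalized_adjacency(A)
    H = X
    for i, W in enumerate(weights):
        H = A_norm @ H @ W
        if i < len(weights) - 1:
            H = np.maximum(H, 0.0)       # ReLU on hidden layers only
    return H

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)   # 3-ship graph
X = rng.normal(size=(3, 4))                                    # 4-dim node features
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 3))]
logits = gcn_forward(A, X, weights)
print(logits.shape)    # per-node scores over 3 hypothetical formation classes
```

A graph-level formation score can then be obtained by pooling the per-node logits, e.g. `logits.mean(axis=0)`.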
<p><b>Figure 8.</b> Some typical samples from the HRSC2016 and SGF datasets. (<b>a</b>) Examples from the HRSC2016 dataset; (<b>b</b>) examples from the SGF dataset.</p>
<p><b>Figure 9.</b> The standard formations of the ship groups. (<b>a</b>–<b>f</b>) are six ship formations arranged in different configurations, named Formation 1 to Formation 6. CV, CG, DD, FFG, and SSN denote aircraft carriers, cruisers, destroyers, frigates, and nuclear submarines, respectively.</p>
<p><b>Figure 10.</b> Qualitative detection results on the HRSC2016 and SGF datasets with different methods. (<b>a</b>) The detection results of <math display="inline"><semantics> <mrow> <msup> <mi mathvariant="normal">R</mi> <mn>2</mn> </msup> <mi>CNN</mi> </mrow> </semantics></math>; (<b>b</b>) the detection results of <math display="inline"><semantics> <mrow> <msup> <mi mathvariant="normal">R</mi> <mn>3</mn> </msup> <mi>Det</mi> </mrow> </semantics></math>; (<b>c</b>) the detection results of the proposed method.</p>
<p><b>Figure 11.</b> The test results of center structure recognition and isolated ship detection. (<b>a</b>–<b>f</b>) represent different ship groups in the SGF dataset. The <b>first row</b> shows the visual results of ship center point detection, and the <b>second row</b> plots the test results. Each graph in the second row shows the identified group central structure and marks the isolated ship targets that do not belong to it. For example, in (<b>a</b>), the yellow dots form the central point structure of the ship group, and the blue triangles represent isolated ships.</p>
<p><b>Figure 12.</b> The visual test results for ship grouping based on feature similarity clustering. The <b>first row</b> shows the visual results of ship center point detection, including different ship groups and some isolated ships. The <b>second row</b> shows the distribution of ship centers. The <b>last row</b> shows the results of the proposed grouping method, with different colors indicating the groups formed as the clustering process evolves.</p>
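Feature-similarity grouping of this kind can be sketched as a pairwise similarity that mixes center distance with heading difference, followed by transitive linking of sufficiently similar ships. All parameters below (`d_max`, the weights, the threshold) are illustrative assumptions, not values from the paper:

```python
import math

def ship_similarity(a, b, d_max=500.0, w_pos=0.5, w_ang=0.5):
    """Similarity in [0, 1] mixing center distance and heading difference.

    a and b are (cx, cy, theta) tuples; d_max, w_pos, and w_ang are
    hypothetical parameters chosen for illustration.
    """
    d = math.hypot(a[0] - b[0], a[1] - b[1])
    pos_sim = max(0.0, 1.0 - d / d_max)
    dtheta = abs(a[2] - b[2]) % math.pi            # orientation is pi-periodic
    dtheta = min(dtheta, math.pi - dtheta)
    ang_sim = 1.0 - dtheta / (math.pi / 2)
    return w_pos * pos_sim + w_ang * ang_sim

def group_ships(ships, thresh=0.7):
    """Union-find grouping: link ships whose pairwise similarity exceeds thresh."""
    parent = list(range(len(ships)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]          # path compression
            i = parent[i]
        return i
    for i in range(len(ships)):
        for j in range(i + 1, len(ships)):
            if ship_similarity(ships[i], ships[j]) >= thresh:
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(ships))]

# Two nearby, similarly headed ships group together; a distant one stays isolated.
ships = [(0, 0, 0.1), (50, 10, 0.2), (900, 900, 1.5)]
labels = group_ships(ships)
print(labels[0] == labels[1], labels[0] == labels[2])  # → True False
```

Ships whose label occurs only once correspond to the isolated targets marked in the figures.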
<p><b>Figure 13.</b> The formation structure representation based on the Delaunay triangulation. The <b>first row</b> shows the visual results of ship grouping. The <b>second row</b> represents the distribution of ship centers. The <b>last row</b> is the graph structure of the different formations.</p>
<p><b>Figure 14.</b> The qualitative experimental results of formation recognition. (<b>a</b>) The recognition result of Formation 1. (<b>b</b>) The recognition result of Formation 2. (<b>c</b>) The recognition result of Formation 6. (<b>d</b>) The recognition result of Formation 5. (<b>e</b>) The recognition result of Formations 2 and 6. (<b>f</b>) The recognition result of Formations 2 and 3. The yellow circles are the center points of the ships, and the groups connected in green represent ship formations.</p>
<p><b>Figure 15.</b> The ship groups and their peripheral contours. (<b>a</b>–<b>c</b>) show the different identified formations, and (<b>d</b>–<b>f</b>) display the convex hull (green) and the outer quadrilateral (orange) of the ship formations.</p>
Abstract
1. Introduction
2. Related Work
2.1. Object Detection
2.2. Ship Target Detection
2.3. Formation Recognition
3. Materials and Methods
3.1. Overview
3.2. Center-Point-Based Method for Ship Detection
3.2.1. Feature Extraction
3.2.2. Center Point Based Detection
3.3. Context-Aware DGCN for Ship Formation Recognition
3.3.1. Ship Grouping Based on Feature Similarity Clustering
3.3.2. Formation Structure Representation Based on Delaunay Triangulation
3.3.3. Formation Classification Based on Context-Aware DGCN
4. Experimental Results and Analysis
4.1. Datasets and Annotation
4.2. Experiment Details and Evaluation Index
4.3. Experiment Results of Ship Detection
4.4. Experiment Results of Formation Recognition
4.4.1. Ship Grouping and Formation Structure Representation
4.4.2. Formation Classification
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Cui, X. Research on Arbitrary-Oriented Ship Detection in Optical Remote Sensing Images. Ph.D. Thesis, University of Chinese Academy of Sciences, Beijing, China, 2021. [Google Scholar]
- Dong, C. Research on the Detection of Ship Targets on the Sea Surface in Optical Remote Sensing Image. Ph.D. Thesis, University of Chinese Academy of Sciences, Changchun, China, 2020. [Google Scholar]
- Chen, L.; Shi, W.; Deng, D. Improved YOLOv3 Based on Attention Mechanism for Fast and Accurate Ship Detection in Optical Remote Sensing Images. Remote Sens. 2021, 13, 660. [Google Scholar] [CrossRef]
- Meroufel, H.; El Amin Larabi, M.; Amri, M. Deep Learning based Ships Detections from ALSAT-2 Satellite Images. In Proceedings of the 2022 IEEE Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS), Istanbul, Turkey, 7–9 March 2022; pp. 86–89. [Google Scholar]
- Hu, J.; Zhi, X.; Shi, T.; Zhang, W.; Cui, Y.; Zhao, S. PAG-YOLO: A Portable Attention-Guided YOLO Network for Small Ship Detection. Remote Sens. 2021, 13, 3059. [Google Scholar] [CrossRef]
- Zhang, C.; Gao, G.; Liu, J.; Duan, D. Oriented Ship Detection Based on Soft Thresholding and Context Information in SAR Images of Complex Scenes. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–15. [Google Scholar] [CrossRef]
- Bai, A.; Chen, J.; Yang, W.; Men, Z.; Zhang, Z.; Zeng, S.; Xu, H.; Cao, W.; Jian, C. Leveraging Permuted Image Restoration for Improved Interpretation of Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–15. [Google Scholar] [CrossRef]
- Yu, W.; Li, J.; Wang, Z.; Yu, Z.; Luo, Y.; Liu, Y.; Feng, J. Detecting rotated ships in SAR images using a streamlined ship detection network and gliding phases. Remote Sens. Lett. 2024, 15, 413–422. [Google Scholar] [CrossRef]
- Mou, F.; Fan, Z.; Jiang, C.; Zhang, Y.; Wang, L.; Li, X. Double Augmentation: A Modal Transforming Method for Ship Detection in Remote Sensing Imagery. Remote Sens. 2024, 16, 600. [Google Scholar] [CrossRef]
- Deng, H.; Zhang, Y. FMR-YOLO: Infrared Ship Rotating Target Detection Based on Synthetic Fog and Multiscale Weighted Feature Fusion. IEEE Trans. Instrum. Meas. 2024, 73, 1–17. [Google Scholar] [CrossRef]
- Song, J.; Kim, D.; Hwang, J.; Kim, H.; Li, C.; Han, S.; Kim, J. Effective Vessel Recognition in High Resolution SAR Images Using Quantitative and Qualitative Training Data Enhancement from Target Velocity Phase Refocusing. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–14. [Google Scholar] [CrossRef]
- Tang, X.; Zhang, J.; Xia, Y.; Xiao, H. DBW-YOLO: A High-Precision SAR Ship Detection Method for Complex Environments. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 7029–7039. [Google Scholar] [CrossRef]
- Alina, C.; Sylvie, L.; Sidonie, L.; Arnaud, W. Deep-NFA: A deep a contrario framework for tiny object detection. Pattern Recognit. 2024, 150, 110312. [Google Scholar]
- Han, Y.; Guo, J.; Yang, H.; Guan, R.; Zhang, T. SSMA-YOLO: A Lightweight YOLO Model with Enhanced Feature Extraction and Fusion Capabilities for Drone-Aerial Ship Image Detection. Drones 2024, 8, 145. [Google Scholar] [CrossRef]
- Chen, X.; Guan, J.; Liu, N.; He, Y. Maneuvering Target Detection via Radon-Fractional Fourier Transform-Based Long-Time Coherent Integration. IEEE Trans. Signal Process. 2014, 62, 939–953. [Google Scholar] [CrossRef]
- Liang, F.; Zhou, Y.; Li, H.; Feng, X.; Zhang, J. Multi-Aircraft Formation Recognition Method of Over-the-Horizon Radar Based on Deep Transfer Learning. IEEE Access 2022, 10, 115411–115423. [Google Scholar] [CrossRef]
- Lin, Z.; Zhang, X.; Hao, N.; He, F. An LSTM-based Fleet Formation Recognition Algorithm. In Proceedings of the 40th Chinese Control Conference (CCC), Shanghai, China, 26–28 July 2021; pp. 8565–8569. [Google Scholar]
- Zhou, Q. Research on UAV Target Detection and Formation Recognition Based on Computer Vision. Master’s Thesis, China Academy of Electronics and Information Technology, Beijing, China, 2021. [Google Scholar]
- You, X.; Li, H. A Sea-Land Segmentation Scheme based on Statistical Model of Sea. In Proceedings of the 4th International Congress on Image and Signal Processing (CISP), Shanghai, China, 12 December 2011; pp. 1155–1159. [Google Scholar]
- Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
- Jian, X.; Fu, K.; Xian, S. An Invariant Generalized Hough Transform Based Method of Inshore Ships Detection. In Proceedings of the 2011 International Symposium on Image and Data Fusion (ISIDF), Yunnan, China, 9–11 August 2011. [Google Scholar]
- Zhang, Z.; Warrell, J.; Torr, P. Proposal Generation for Object Detection Using Cascaded Ranking SVMs. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 1497–1504. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Wang, X.; Yang, X.; Zhang, S.; Li, Y.; Feng, L.; Fang, S.; Chen, K.; Zhang, W. Consistent-Teacher: Towards Reducing Inconsistent Pseudo-Targets in Semi-Supervised Object Detection. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 3240–3249. [Google Scholar]
- Pang, J.; Li, C.; Shi, J.; Xu, Z.; Feng, H. R2-CNN: Fast Tiny Object Detection in Large-Scale Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5512–5524. [Google Scholar] [CrossRef]
- Wang, P.; Sun, X.; Diao, W.; Fu, K. FMSSD: Feature-Merged Single-Shot Detection for Multiscale Objects in Large-Scale Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3377–3390. [Google Scholar] [CrossRef]
- Yang, X.; Yang, J.; Yan, J.; Zhang, Y.; Zhang, T.; Guo, Z.; Sun, X.; Fu, K. SCRDet: Towards More Robust Detection for Small, Cluttered and Rotated Objects. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8232–8241. [Google Scholar]
- Zhang, J.; Shi, X.; Zheng, C.; Wu, J.; Li, Y. MRPFA-Net for Shadow Detection in Remote-Sensing Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–11. [Google Scholar] [CrossRef]
- Xu, X.; Yang, Z.; Li, J. AMCA: Attention-Guided Multi-Scale Context Aggregation Network for Remote Sensing Image Change Detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–19. [Google Scholar] [CrossRef]
- Zhao, L.; Zhu, M. MS-YOLOv7: YOLOv7 Based on Multi-Scale for Object Detection on UAV Aerial Photography. Drones 2023, 7, 188. [Google Scholar] [CrossRef]
- Liu, C.; Yang, D.; Tang, L.; Zhou, X.; Deng, Y. A Lightweight Object Detector Based on Spatial-Coordinate Self-Attention for UAV Aerial Images. Remote Sens. 2023, 15, 83. [Google Scholar] [CrossRef]
- Li, Q.; Mou, L.; Liu, Q.; Wang, Y.; Zhu, X. HSF-Net: Multiscale Deep Feature Embedding for Ship Detection in Optical Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 7147–7161. [Google Scholar] [CrossRef]
- Lei, F.; Wang, W.; Zhang, W. Ship Extraction Using Post CNN from High Resolution Optical Remotely Sensed Images. In Proceedings of the IEEE 3rd Information Technology, Networking, Electronic, and Automation Control Conference (ITNEC), Chengdu, China, 15–17 March 2019; pp. 2531–2535. [Google Scholar]
- Chen, W.; Han, B.; Yang, Z.; Gao, X. MSSDet: Multi-scale Ship-detection Framework in Optical Remote-sensing Images and New Benchmark. Remote Sens. 2022, 14, 5460. [Google Scholar] [CrossRef]
- Zhang, T.; Lou, X.; Wang, H.; Cheng, Y. Context-Preserving Region-based Contrastive Learning Framework for Ship Detection in SAR. J. Signal Process. Syst. Signal Image Video Technol. 2022, 95, 3–12. [Google Scholar] [CrossRef]
- Yang, X.; Sun, H.; Fu, K.; Yang, J.; Sun, X.; Yan, M.L.; Guo, Z. Automatic Ship Detection in Remote Sensing Images from Google Earth of Complex Scenes based on Multiscale Rotation Dense Feature Pyramid Networks. Remote Sens. 2018, 10, 132. [Google Scholar] [CrossRef]
- Nie, M.; Zhang, J.; Zhang, X. Ship Segmentation and Orientation Estimation using Key-points Detection and Voting Mechanism in Remote Sensing Images. In Proceedings of the 16th International Symposium on Neural Networks (ISNN), Moscow, Russia, 10–12 July 2019; pp. 402–413. [Google Scholar]
- Zhu, M.; Hu, G.P.; Li, S.; Zhou, H.; Wang, S. FSFADet: Arbitrary-oriented Ship Detection for SAR Images based on Feature Separation and Feature Alignment. Neural Process. Lett. 2022, 54, 1995–2005. [Google Scholar] [CrossRef]
- Zhang, J.; Huang, R.; Li, Y.; Pan, B. Oriented Ship Detection based on Intersecting Circle and Deformable RoI in Remote Sensing Images. Remote Sens. 2022, 14, 4749. [Google Scholar] [CrossRef]
- Deng, C.; Cao, Z.; Xiao, Y.; Chen, Y.; Fang, Z.; Yan, R. Recognizing the Formations of CVBG based on Multi-viewpoint Context. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 1793–1810. [Google Scholar] [CrossRef]
- Shi, L.; Huang, Z.; Feng, X. Recognizing the Formations of CVBG based on Shape Context Using Electronic Reconnaissance Data. Electron. Lett. 2021, 57, 562–563. [Google Scholar] [CrossRef]
- Zhou, X.; Wang, D.; Krähenbühl, P. Objects as Points. arXiv 2019, arXiv:1904.07850. [Google Scholar]
- Yu, F.; Wang, D.; Shelhamer, E.; Darrell, T. Deep Layer Aggregation. In Proceedings of the 31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803. [Google Scholar]
- Zhou, Y.; Ye, Q.; Qiu, Q.; Jiao, J. Oriented Response Networks. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4961–4970. [Google Scholar]
- Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023. [Google Scholar] [CrossRef] [PubMed]
- Woo, S.; Park, J.; Lee, J.; Kweon, I. CBAM: Convolutional Block Attention Module. In Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local Neural Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803. [Google Scholar]
- Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 13708–13717. [Google Scholar]
- Liu, Y.; Zhai, J. The Representation and Identification of Spatial Graphics for Random Point Cluster. Sci. Surv. Mapp. 2005, 4, 39–42+4. [Google Scholar]
- Liu, T.; Du, Q.; Yan, H. Spatial Similarity Assessment of Point Clusters. Geomat. Inf. Sci. Wuhan Univ. 2011, 36, 1149–1153. [Google Scholar]
- Liang, Z.; Xie, H.; Xie, G. Study on the Calculation Model of Spatial Grouped Point Object Similarity and its Application. Bull. Surv. Mapp. 2016, 3, 111–114. [Google Scholar]
- Liu, Z.; Yuan, L.; Weng, L.; Yang, Y. A High Resolution Optical Satellite Image Dataset for Ship Recognition and Some New Baselines. In Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods (ICPRAM), Porto, Portugal, 24–26 February 2017; pp. 324–331. [Google Scholar]
- Deng, C. Research on Collective Motion Analysis and Recognition. Ph.D. Thesis, Huazhong University of Science and Technology, Wuhan, China, 2016. [Google Scholar]
- Wu, Y.; Xue, H.; Yin, D. Target Clustering in Naval Battlefield Environment Based on DBSCAN Algorithm. J. Nav. Univ. Eng. 2023, 35, 71–76. [Google Scholar]
- Liu, Z.; Hu, J.; Weng, L.; Yang, Y. Rotated Region Based CNN for Ship Detection. In Proceedings of the 24th IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 900–904. [Google Scholar]
- Lin, T.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 318–327. [Google Scholar] [CrossRef] [PubMed]
- Ding, J.; Xue, N.; Long, Y.; Xia, G.S.; Lu, Q.; Soc, I. Learning RoI Transformer for Oriented Object Detection in Aerial Images. In Proceedings of the 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 2844–2853. [Google Scholar]
- Qian, W.; Yang, X.; Peng, S.; Yan, J.; Guo, Y. Learning Modulated Loss for Rotated Object Detection. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; pp. 2458–2466. [Google Scholar]
- Newell, A.; Yang, K.; Deng, J. Stacked Hourglass Networks for Human Pose Estimation. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; pp. 483–499. [Google Scholar]
- Yang, X.; Yan, J.; Feng, Z.; He, T. R3Det: Refined Single-stage Detector with Feature Refinement for Rotating Object. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; pp. 3163–3171. [Google Scholar]
- Qiu, Z.; Ni, L.; Yao, T.; Liang, J.; Yang, D.; Wang, J. Research on Air Target Classification Method Based on Louvain Algorithm. J. Gun Launch Control. 2023, 1–9. [Google Scholar] [CrossRef]
- Li, R.; Wang, S.; Zhu, F.; Huang, J. Adaptive Graph Convolution Neural Networks. arXiv 2018, arXiv:1801.03226. [Google Scholar]
- Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph Attention Networks. arXiv 2018, arXiv:1710.10903. [Google Scholar]
Parameter | – | PPRN | ROI-Trans | RSDET | CenterNet-Rbb | Ours 1 | Ours 1
---|---|---|---|---|---|---|---
Backbone | Resnet101 | Resnet50 | VGG16 | Resnet50 | Resnet50 | Hourglass | DLA
Image-size | 800 × 800 | 800 × 800 | 800 × 800 | 512 × 800 | 800 × 800 | 1024 × 1024 | 1024 × 1024
– | 65.43 | 71.65 | 72.35 | 74.42 | 75.35 | 73.62 | 77.06
– | 67.35 | 73.52 | 74.26 | 76.39 | 77.21 | 75.46 | 78.45
– | 69.25 | 75.44 | 76.21 | 78.28 | 79.19 | 77.35 | 79.89
– | 71.17 | 77.38 | 77.60 | 79.72 | 80.37 | 78.66 | 81.2
F1 score | 0.75 | 0.82 | 0.83 | 0.86 | 0.87 | 0.84 | 0.89
FPS | 5 | 1.5 | – | 6 | 15.4 | – | 17.8
Performance Parameter | RetinaNet-Rbb | SCRDet | CSL | CenterNet-Rbb | Ours 1 | Ours 1
---|---|---|---|---|---|---
Backbone | Resnet50 | Resnet50 | Resnet50 | Resnet50 | Hourglass | DLA
Image-size | 512 × 512 | 800 × 800 | 800 × 800 | 512 × 800 | 1024 × 1024 | 1024 × 1024
– | 77.67 | 76.18 | 77.49 | 75.38 | 78.96 | 81.62
– | 79.45 | 77.86 | 79.23 | 77.12 | 80.72 | 83.38
– | 81.23 | 79.54 | 80.97 | 78.85 | 82.48 | 85.13
– | 83.02 | 81.32 | 82.78 | 80.63 | 84.29 | 86.92
F1 score | 0.89 | 0.87 | 0.88 | 0.86 | 0.90 | 0.91
FPS | 12.3 | 32.6 | 10.3 | 12.5 | 14.7 | 28.2
Self-Attention | Backbone | Image-Size | mAP |
---|---|---|---|
+SE [46] | DLA34 | 512 × 512 | 80.92 |
+CBAM [47] | DLA34 | 512 × 512 | 81.12 |
+CA | DLA34 | 512 × 512 | 81.25 |
Number | Number of Ship Groups | Central Structure of Ship Groups | Isolated Ships | Total Number of Ships | Number of Clustering Correctness |
---|---|---|---|---|---|
1 | 1 | Ship Group 1 | 0 | 12 | 12 |
2 | 1 | Ship Group 4 | 0 | 17 | 16 |
3 | 2 | Ship Groups 2/3 | 0 | 23 | 22 |
4 | 1 | Ship Group 6 | 6 | 26 | 22 |
5 | 2 | Ship Groups 2/5 | 13 | 46 | 42 |
6 | 1 | Ship Group 5 | 7 | 26 | 22 |
Method\Number | 1 | 2 | 3 | 4 | 5 | 6 |
---|---|---|---|---|---|---|
K-means | 100 | 88.63 | 85.67 | 79.61 | 83.34 | 78.36 |
K-means++ | 100 | 91.23 | 89.12 | 81.39 | 87.65 | 82.75 |
DBSCAN [55] | 100 | 89.96 | 87.54 | 79.95 | 84.47 | 79.36 |
AGNES [55] | 100 | 89.42 | 86.31 | 78.96 | 83.66 | 78.93 |
Louvain [62] | 100 | 91.49 | 88.68 | 80.98 | 88.37 | 83.84 |
The proposed | 100 | 91.12 | 90.62 | 82.62 | 90.30 | 83.62 |
Methods | Model | Accuracy (%)
---|---|---
The method based on image-level recognition | VGG-16 | 53.26
The method based on image-level recognition | ResNet-50 | 55.32
The method based on image-level recognition | ResNet-101 | 56.21
The method based on graph data recognition | TSC | 68.57
The method based on graph data recognition | AGCN | 74.18
The method based on graph data recognition | GAT | 73.34
The method based on graph data recognition | Ours | 75.59
Ship Group | Topological Neighbors | Area of Convex Hull/S | Length of the External Rectangle/X | Width of External Rectangle/Y |
---|---|---|---|---|
Ship Group 1 | 46 | 99,585 | 376 | 398 |
Ship Group 2 | 53 | 113,665 | 435 | 402 |
Ship Group 3 | 28 | 30,806 | 276 | 279 |
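The convex-hull area and outer-rectangle dimensions reported above can be computed directly from the detected center points. A sketch using SciPy (an axis-aligned outer rectangle is assumed here for simplicity; the paper's outer quadrilateral may be oriented):

```python
import numpy as np
from scipy.spatial import ConvexHull

def formation_contour(points):
    """Convex-hull area and the side lengths of an axis-aligned outer rectangle."""
    pts = np.asarray(points, dtype=float)
    hull = ConvexHull(pts)
    # For 2-D inputs, ConvexHull.volume is the enclosed area.
    area = hull.volume
    x_len, y_len = pts.max(axis=0) - pts.min(axis=0)
    return area, float(x_len), float(y_len)

# The interior point (2, 1) does not affect the hull of the 4 × 3 rectangle.
pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]
print(formation_contour(pts))  # → (12.0, 4.0, 3.0)
```

These three quantities correspond to S, X, and Y in the table above and feed the pairwise similarity comparison between ship groups.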
Ship Groups | – | – | – | – | –
---|---|---|---|---|---
Ship Group 2\3 | 0.862 | 0.309 | 0.604 | 0.896 | 0.616
Ship Group 1\2 | 0.860 | 0.876 | 0.861 | 0.914 | 0.877
Ship Group 1\3 | 0.862 | 0.309 | 0.604 | 0.896 | 0.616
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, T.; Yang, X.; Lu, R.; Xie, X.; Wang, S.; Su, S. Context-Aware DGCN-Based Ship Formation Recognition in Remote Sensing Images. Remote Sens. 2024, 16, 3435. https://doi.org/10.3390/rs16183435
Zhang T, Yang X, Lu R, Xie X, Wang S, Su S. Context-Aware DGCN-Based Ship Formation Recognition in Remote Sensing Images. Remote Sensing. 2024; 16(18):3435. https://doi.org/10.3390/rs16183435
Chicago/Turabian Style: Zhang, Tao, Xiaogang Yang, Ruitao Lu, Xueli Xie, Siyu Wang, and Shuang Su. 2024. "Context-Aware DGCN-Based Ship Formation Recognition in Remote Sensing Images" Remote Sensing 16, no. 18: 3435. https://doi.org/10.3390/rs16183435