SIVED: A SAR Image Dataset for Vehicle Detection Based on Rotatable Bounding Box
Figure 1. Chips of size 512 × 512 in SIVED: (a) vehicles in different urban scenarios; (b) MSTAR chips after splicing.
Figure 2. Annotation example: (a) visualization of the target annotation; (b) the annotation TXT file.
Figure 3. An annotation XML file.
Figure 4. The construction workflow of SIVED; * represents the file name.
Figure 5. Vehicles that cannot be extracted effectively: (a) area 1; (b) area 2.
Figure 6. Example of a small building easily confused with vehicle targets in (a) the SAR image and (b) Google Earth.
Figure 7. Regions where trees and vehicles overlap each other; red boxes represent labeled vehicles: (a) completely covered but visible; (b) partially covered and partially visible.
Figure 8. Structure of SIVED; * represents the file name.
Figure 9. Vehicle scale statistics in SIVED: (a) entire dataset; (b) training set; (c) validation set; (d) test set.
Figure 10. The long-edge definition of the rotatable bounding box.
Figure 11. Vehicle angle statistics in SIVED: (a) entire dataset; (b) training set; (c) validation set; (d) test set.
Figure 12. The architecture of the object detection framework.
Figure 13. The architecture of Faster R-CNN.
Figure 14. The architecture of RetinaNet.
Figure 15. The architecture of FCOS.
Figure 16. The architecture of S²A-Net.
Figure 17. Parameter representation of Gliding Vertex.
Figure 18. The architecture of Oriented RepPoints.
Figure 19. Ground truth and detection results of the eight detection networks. Red boxes in the first column represent ground truths; green boxes in the other columns denote detected vehicles; yellow boxes mark false alarms; pink boxes mark missed vehicles; blue boxes mark bounding boxes with large offsets. (a) Ground truth; (b) Oriented RepPoints; (c) Gliding Vertex; (d) Rotated Faster R-CNN; (e) KLD; (f) Rotated RetinaNet; (g) S²A-Net; (h) Rotated FCOS; (i) RoI Transformer.
Figure 20. FARAD chips of different bands: (a) Ka band; (b) X band.
Abstract
1. Introduction
- Using publicly available high-resolution SAR data containing vehicle targets, we construct the first SAR image dataset spanning three bands for vehicle detection. Rotatable bounding box annotation is adopted to reduce redundant background clutter and to position targets accurately in dense scenes. The dataset can advance vehicle detection research and facilitate vehicle monitoring in complex terrestrial environments.
- An algorithm combined with a detection network is proposed to annotate the MSTAR data and increase annotation efficiency. The annotation files contain enriched information, expanding the potential for various applications.
- Experiments are conducted using eight state-of-the-art rotated detection algorithms, establishing a baseline for evaluating the dataset. The experimental results confirm the dataset’s stability and the adaptability of current algorithms to vehicle targets.
2. Basic Information about SIVED
3. Dataset Construction
3.1. Data Preprocessing and Selection
3.2. Semi-Automatic Annotation of SIVED
1. Algorithm automatic annotation: An automatic annotation algorithm is designed for MSTAR. The input is the MSTAR chips, and the output is the visualized annotation boxes together with the coordinates of their four corner points. After a chip is input, a 30 × 30 area at its center is first masked; the purpose is to roughly mask the target area so that target pixels do not leak into the clutter and bias the clutter distribution estimate. The second step, clutter estimation, adopts the Rayleigh distribution [34], whose probability density function (PDF) is
   $$f(x) = \frac{x}{\sigma^{2}}\,\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right), \qquad x \geq 0,$$
   where $\sigma$ is the scale parameter. A sketch of this step is given after this list.
2. Detection network annotation: The previous step automatically outputs 5168 annotation boxes for all chips. A total of 2893 chips (about 56% of the total), whose annotation boxes are confirmed to be accurate by visual interpretation, form a sample set that is fed into a rotated YOLOv5 network for precise detection. The network is trained for 50 epochs and the weights are retained; the remaining 2275 chips are then fed into the trained network, which outputs the coordinates of the detection boxes.
3. Manual correction of labels: Eventually, 162 detected boxes are offset, accounting for about 7% of the chips processed by the network; they contain only part of the target or too many non-target pixels and are corrected by manual labeling. With this, the MSTAR data annotation is complete.
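The following is a minimal sketch of the automatic annotation step described above: mask the chip center, fit a Rayleigh distribution to the remaining clutter, threshold the chip, and fit a rotatable box to the detected pixels. The function name and the false-alarm rate `pfa` are illustrative assumptions, not the authors’ exact implementation.

```python
import cv2
import numpy as np

def annotate_mstar_chip(chip, mask_size=30, pfa=1e-3):
    """Sketch of the automatic annotation of one MSTAR chip."""
    h, w = chip.shape
    # Step 1: roughly mask a 30 x 30 area at the chip center so that
    # target pixels do not leak into the clutter statistics.
    clutter_mask = np.ones((h, w), dtype=bool)
    r0, c0 = (h - mask_size) // 2, (w - mask_size) // 2
    clutter_mask[r0:r0 + mask_size, c0:c0 + mask_size] = False
    samples = chip[clutter_mask].astype(np.float64)

    # Step 2: maximum-likelihood fit of the Rayleigh scale parameter,
    # sigma^2 = mean(x^2) / 2, then a threshold from the Rayleigh tail
    # probability P(X > T) = exp(-T^2 / (2 sigma^2)) = pfa.
    sigma2 = np.mean(samples ** 2) / 2.0
    threshold = np.sqrt(-2.0 * sigma2 * np.log(pfa))

    # Step 3: fit a rotatable box to the pixels above the threshold.
    rows, cols = np.nonzero(chip > threshold)
    if rows.size == 0:
        return None
    points = np.column_stack([cols, rows]).astype(np.float32)
    rect = cv2.minAreaRect(points)   # ((cx, cy), (w, h), angle)
    return cv2.boxPoints(rect)       # the four corner points, 4 x 2
```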
3.3. Dataset Production
3.4. File Structure
4. Dataset Analysis
4.1. Scale Distribution Analysis
1. Few features are available: small targets carry little visual information, extracting discriminative features is difficult, and the process is easily disturbed by environmental factors.
2. High localization accuracy is required: because small targets cover only a small area of the image, a bounding-box offset of even a single pixel causes a significant prediction error (the toy computation after this list illustrates the effect).
3. Dense scenes are common, and small targets in close proximity are prone to interference from neighboring targets.
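To make the second point concrete, the toy computation below shows how a one-pixel shift costs far more IoU for a vehicle-sized box than for a large object; axis-aligned boxes are used purely for simplicity.

```python
def iou(a, b):
    """Axis-aligned IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

# A one-pixel shift in x and y on a 10 x 10 vehicle-sized box
# already drops the match below an IoU threshold of 0.75:
print(iou((0, 0, 10, 10), (1, 1, 11, 11)))       # ~0.68
# The same shift on a 100 x 100 object is almost negligible:
print(iou((0, 0, 100, 100), (1, 1, 101, 101)))   # ~0.96
```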
4.2. Angle Distribution Analysis
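The angle statistics in this section follow the long-edge definition illustrated in Figure 10: the box angle is the angle between the longer side and the x-axis, folded into [−90°, 90°). A minimal sketch of this convention, assuming the four corner points are stored in order (as in the annotation files); the function name is illustrative:

```python
import numpy as np

def long_edge_angle(corners):
    """Angle of a rotated box under the long-edge definition, in degrees."""
    corners = np.asarray(corners, dtype=np.float64)
    e1 = corners[1] - corners[0]       # one side of the box
    e2 = corners[2] - corners[1]       # the adjacent side
    long_edge = e1 if np.linalg.norm(e1) >= np.linalg.norm(e2) else e2
    theta = np.degrees(np.arctan2(long_edge[1], long_edge[0]))
    # A box is unchanged by a 180 degree rotation, so fold into [-90, 90).
    return (theta + 90.0) % 180.0 - 90.0
```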
4.3. Properties of SIVED
5. Methodology
5.1. Object Detection Framework Overview
5.2. Rotated Faster R-CNN
5.3. Rotated RetinaNet
5.4. Rotated FCOS
5.5. S²A-Net
5.6. RoI Transformer
5.7. Gliding Vertex
5.8. Oriented RepPoints
5.9. KLD
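KLD [47] models each rotated box $(x, y, w, h, \theta)$ as a 2-D Gaussian and regresses boxes by minimizing their divergence. For reference, the box-to-Gaussian conversion follows [47], and the divergence is the standard KL divergence between two 2-D Gaussians:

$$\mu = (x, y)^{T}, \qquad \Sigma^{1/2} = R_{\theta}\,\mathrm{diag}\!\left(\frac{w}{2}, \frac{h}{2}\right) R_{\theta}^{T},$$

$$D_{\mathrm{KL}}\!\left(\mathcal{N}_{p} \,\|\, \mathcal{N}_{t}\right) = \frac{1}{2}\left[(\mu_{p} - \mu_{t})^{T}\Sigma_{t}^{-1}(\mu_{p} - \mu_{t}) + \mathrm{tr}\!\left(\Sigma_{t}^{-1}\Sigma_{p}\right) + \ln\frac{|\Sigma_{t}|}{|\Sigma_{p}|} - 2\right],$$

where $R_{\theta}$ is the rotation matrix of angle $\theta$ and the subscripts $p$ and $t$ denote the predicted and target boxes.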
6. Experimental Results
6.1. Experimental Setup and Evaluation Metrics
6.2. Experiments for Baselines
6.3. Additional Experiments
1. An experiment establishing baseline metrics for two different scenarios, urban and MSTAR (simple), with the training set comprising data from all scenarios. These metrics can serve as a point of reference for researchers when selecting challenging subsets for their own experiments. As shown in Table 6, the mAP50 score is higher on the MSTAR test scenes than on the urban scenes, reflecting the greater complexity and interference present in urban settings.
2. To verify the advantage of SIVED over FARAD and MSTAR alone, we chose the simply structured Rotated RetinaNet and trained it on each of the three datasets, comparing the resulting networks via the metric values shown in Table 7. The results show that the constructed SIVED dataset improves network performance.
7. Discussion
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Jiao, J.; Zhang, Y.; Sun, H.; Yang, X.; Gao, X.; Hong, W.; Fu, K.; Sun, X. A densely connected end-to-end neural network for multiscale and multiscene SAR ship detection. IEEE Access 2018, 6, 20881–20892.
2. Cui, Z.; Li, Q.; Cao, Z.; Liu, N. Dense attention pyramid networks for multi-scale ship detection in SAR images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8983–8997.
3. Zhang, T.; Zhang, X.; Liu, C.; Shi, J.; Wei, S.; Ahmad, I.; Zhan, X.; Zhou, Y.; Pan, D.; Li, J. Balance learning for ship detection from synthetic aperture radar remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2021, 182, 190–207.
4. Zhang, T.; Zhang, X. High-speed ship detection in SAR images based on a grid convolutional neural network. Remote Sens. 2019, 11, 1206.
5. Zhang, T.; Zhang, X.; Shi, J.; Wei, S. Depthwise separable convolution neural network for high-speed SAR ship detection. Remote Sens. 2019, 11, 2483.
6. Zhang, T.; Zhang, X.; Shi, J.; Wei, S. HyperLi-Net: A hyper-light deep learning network for high-accurate and high-speed ship detection from synthetic aperture radar imagery. ISPRS J. Photogramm. Remote Sens. 2020, 167, 123–153.
7. An, Q.; Pan, Z.; Liu, L.; You, H. DRBox-v2: An improved detector with rotatable boxes for target detection in SAR images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8333–8349.
8. Chen, C.; He, C.; Hu, C.; Pei, H.; Jiao, L. MSARN: A deep neural network based on an adaptive recalibration mechanism for multiscale and arbitrary-oriented SAR ship detection. IEEE Access 2019, 7, 159262–159283.
9. Chen, S.; Zhang, J.; Zhan, R. R2FA-Det: Delving into high-quality rotatable boxes for ship detection in SAR images. Remote Sens. 2020, 12, 2031.
10. Xu, C.; Zhang, B.; Gao, J.; Wu, F.; Zhang, H.; Wang, C. FCOSR: An anchor-free method for arbitrary-oriented ship detection in SAR images. J. Radars 2022, 11, 1–12.
11. He, C.; Tu, M.; Xiong, D.; Tu, F.; Liao, M. A component-based multi-layer parallel network for airplane detection in SAR imagery. Remote Sens. 2018, 10, 1016.
12. Wang, J.; Xiao, H.; Chen, L.; Xing, J.; Pan, Z.; Luo, R.; Cai, X. Integrating weighted feature fusion and the spatial attention module with convolutional neural networks for automatic aircraft detection from SAR images. Remote Sens. 2021, 13, 910.
13. Zhao, Y.; Zhao, L.; Liu, Z.; Hu, D.; Kuang, G.; Liu, L. Attentional feature refinement and alignment network for aircraft detection in SAR imagery. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–16.
14. Zhang, P.; Xu, H.; Tian, T.; Gao, P.; Li, L.; Zhao, T.; Zhang, N.; Tian, J. SEFEPNet: Scale expansion and feature enhancement pyramid network for SAR aircraft detection with small sample dataset. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 3365–3375.
15. Bao, W.; Hu, J.; Huang, M.; Xu, Y.; Ji, N.; Xiang, X. Detecting fine-grained airplanes in SAR images with sparse attention-guided pyramid and class-balanced data augmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 8586–8599.
16. Ma, C.; Zhang, Y.; Guo, J.; Hu, Y.; Geng, X.; Li, F.; Lei, B.; Ding, C. End-to-end method with transformer for 3-D detection of oil tank from single SAR image. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–19.
17. Wu, Q.; Zhang, B.; Xu, C.; Zhang, H.; Wang, C. Dense oil tank detection and classification via YOLOX-TR network in large-scale SAR images. Remote Sens. 2022, 14, 3246.
18. Xu, X.; Zhang, X.; Zhang, T.; Yang, Z.; Shi, J.; Zhan, X. Shadow-background-noise 3D spatial decomposition using sparse low-rank Gaussian properties for video-SAR moving target shadow enhancement. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
19. Wang, Y.; Wang, C.; Zhang, H.; Dong, Y.; Wei, S. A SAR dataset of ship detection for deep learning under complex backgrounds. Remote Sens. 2019, 11, 765.
20. Wei, S.; Zeng, X.; Qu, Q.; Wang, M.; Su, H.; Shi, J. HRSID: A high-resolution SAR images dataset for ship detection and instance segmentation. IEEE Access 2020, 8, 120234–120254.
21. Zhang, T.; Zhang, X.; Li, J.; Xu, X.; Wang, B.; Zhan, X.; Xu, Y.; Ke, X.; Zeng, T.; Su, H. SAR ship detection dataset (SSDD): Official release and comprehensive data analysis. Remote Sens. 2021, 13, 3690.
22. Xu, C.; Su, H.; Li, J.; Liu, Y.; Yao, L.; Gao, L.; Yan, W.; Wang, T. RSDD-SAR: Rotated ship detection dataset in SAR images. J. Radars 2022, 11, 581–599.
23. Zhang, T.; Zhang, X.; Ke, X.; Zhan, X.; Shi, J.; Wei, S.; Pan, D.; Li, J.; Su, H.; Zhou, Y. LS-SSDD-v1.0: A deep learning dataset dedicated to small ship detection from large-scale Sentinel-1 SAR images. Remote Sens. 2020, 12, 2997.
24. Keydel, E.R.; Lee, S.W.; Moore, J.T. MSTAR extended operating conditions: A tutorial. In Proceedings of SPIE 2757, Algorithms for Synthetic Aperture Radar Imagery III, Orlando, FL, USA, 10–12 April 1996; SPIE: Bellingham, WA, USA, 1996; pp. 228–242.
25. Long, Y.; Jiang, X.; Liu, X.; Zhang, Y. SAR ATR with rotated region based on convolution neural network. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019.
26. Zhang, X.; Chai, X.; Chen, Y.; Yang, Z.; Liu, G.; He, A.; Li, Y. A novel data augmentation method for SAR image target detection and recognition. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, 11–16 July 2021; pp. 3581–3584.
27. Sun, Y.; Wang, W.; Zhang, Q.; Ni, H.; Zhang, X. Improved YOLOv5 with transformer for large scene military vehicle detection on SAR image. In Proceedings of the 2022 7th International Conference on Image, Vision and Computing (ICIVC), Xi’an, China, 26–28 July 2022; pp. 87–93.
28. Complex SAR Data. Available online: https://www.sandia.gov/radar/pathfinder-radar-isr-and-synthetic-aperture-radar-sar-systems/complex-data/ (accessed on 11 October 2022).
29. Wang, Z.; Du, L.; Mao, J.; Liu, B.; Yang, D. SAR target detection based on SSD with data augmentation and transfer learning. IEEE Geosci. Remote Sens. Lett. 2018, 16, 150–154.
30. Zou, B.; Qin, J.; Zhang, L. Vehicle detection based on semantic-context enhancement for high-resolution SAR images in complex background. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
31. Tang, T.; Wang, Y.; Liu, H.; Zou, S. CFAR-guided dual-stream single-shot multibox detector for vehicle detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
32. Xia, G.-S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; Zhang, L. DOTA: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3974–3983.
33. Everingham, M.; Van Gool, L.; Williams, C.K.; Winn, J.; Zisserman, A. The PASCAL Visual Object Classes (VOC) challenge. Int. J. Comput. Vis. 2010, 88, 303–338.
34. Ward, K. Compound representation of high resolution sea clutter. Electron. Lett. 1981, 17, 561–563.
35. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698.
36. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common objects in context. In Proceedings of the 13th European Conference on Computer Vision (ECCV 2014), Zurich, Switzerland, 6–12 September 2014; Part V, pp. 740–755.
37. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
38. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
39. Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
40. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
41. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
42. Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9627–9636.
43. Han, J.; Ding, J.; Li, J.; Xia, G.-S. Align deep features for oriented object detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–11.
44. Ding, J.; Xue, N.; Long, Y.; Xia, G.-S.; Lu, Q. Learning RoI Transformer for oriented object detection in aerial images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2849–2858.
45. Xu, Y.; Fu, M.; Wang, Q.; Wang, Y.; Chen, K.; Xia, G.-S.; Bai, X. Gliding vertex on the horizontal bounding box for multi-oriented object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 1452–1459.
46. Li, W.; Chen, Y.; Hu, K.; Zhu, J. Oriented RepPoints for aerial object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1829–1838.
47. Yang, X.; Yang, X.; Yang, J.; Ming, Q.; Wang, W.; Tian, Q.; Yan, J. Learning high-precision bounding box for rotated object detection via Kullback–Leibler divergence. Adv. Neural Inf. Process. Syst. 2021, 34, 18381–18394.
48. Lin, Z.; Ji, K.; Leng, X.; Kuang, G. Squeeze and excitation rank Faster R-CNN for ship detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2018, 16, 751–755.
49. Wang, Y.; Wang, C.; Zhang, H.; Dong, Y.; Wei, S. Automatic ship detection based on RetinaNet using multi-resolution Gaofen-3 imagery. Remote Sens. 2019, 11, 531.
50. Sun, Z.; Dai, M.; Leng, X.; Lei, Y.; Xiong, B.; Ji, K.; Kuang, G. An anchor-free detection method for ship targets in high-resolution SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7799–7816.
51. Zhou, Y.; Yang, X.; Zhang, G.; Wang, J.; Liu, Y.; Hou, L.; Jiang, X.; Liu, X.; Yan, J.; Lyu, C. MMRotate: A rotated object detection benchmark using PyTorch. In Proceedings of the 30th ACM International Conference on Multimedia, Lisbon, Portugal, 10–14 October 2022; pp. 7331–7334.
Table 1. Basic information of the source data.

| Data | Source | Location | Band | Polarization | Resolution |
|---|---|---|---|---|---|
| FARAD | Sandia National Laboratory | Albuquerque, NM, USA | Ka/X | VV/HH | 0.1 m × 0.1 m |
| MiniSAR | Sandia National Laboratory | Albuquerque, NM, USA | Ku | - | 0.1 m × 0.1 m |
| MSTAR | U.S. Air Force | - | X | HH | 0.3 m × 0.3 m |
Table 2. Composition of SIVED: numbers of chips and vehicles.

| | Scene | Train | Valid | Test | Total | Overall |
|---|---|---|---|---|---|---|
| Number of chips | Urban | 578 | 72 | 71 | 721 | 1044 |
| | MSTAR | 259 | 32 | 32 | 323 | |
| Number of vehicles | Urban | 5417 | 710 | 718 | 6845 | 12,013 |
| | MSTAR | 4144 | 512 | 512 | 5168 | |
Table 3. Target scale definitions.

| Type | Area |
|---|---|
| Small target | |
| Medium target | |
| Large target | |
Table 4. Evaluation metrics.

| Metrics | Explanation |
|---|---|
| mAP | mAP when IoU = 0.5:0.05:0.95 |
| mAP50 | mAP when IoU = 0.5 |
| mAP75 | mAP when IoU = 0.75 |
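As a reading aid for Table 4, this small sketch (the helper name is illustrative) makes the 0.5:0.05:0.95 notation explicit: mAP is the mean of AP over ten IoU thresholds, while mAP50 and mAP75 are single entries of the same sweep.

```python
import numpy as np

def coco_style_map(ap_at_iou):
    """Mean AP over IoU thresholds 0.50, 0.55, ..., 0.95,
    where ap_at_iou maps an IoU threshold to the AP measured there."""
    thresholds = np.round(np.arange(0.50, 1.00, 0.05), 2)
    return float(np.mean([ap_at_iou[t] for t in thresholds]))

# mAP50 and mAP75 are simply ap_at_iou[0.5] and ap_at_iou[0.75].
```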
Table 5. Baseline results of the eight rotated detection networks on the SIVED test set.

| Network | Recall * | Precision * | mAP * | mAP75 * | mAP50 * |
|---|---|---|---|---|---|
| RoI Transformer | 95.61 | 84.40 | 37.45 | 16.91 | 93.47 |
| Rotated FCOS | 88.86 | 96.47 | 50.40 | 48.13 | 95.60 |
| S²A-Net | 97.48 | 90.90 | 55.49 | 57.32 | 97.72 |
| Rotated RetinaNet | 97.48 | 92.73 | 53.11 | 50.93 | 97.76 |
| KLD | 98.05 | 93.34 | 57.49 | 64.48 | 97.92 |
| Rotated Faster R-CNN | 97.80 | 95.55 | 56.03 | 59.00 | 98.09 |
| Gliding Vertex | 98.13 | 95.72 | 56.06 | 59.59 | 98.33 |
| Oriented RepPoints | 98.05 | 95.11 | 60.15 | 70.69 | 99.13 |
Table 6. mAP50 on the whole test set and on the urban and MSTAR scenarios separately.

| Network | mAP50_all * | mAP50_urban * | mAP50_MSTAR * |
|---|---|---|---|
| RoI Transformer | 93.47 | 89.23 | 97.91 |
| Rotated FCOS | 95.60 | 92.19 | 98.86 |
| S²A-Net | 97.72 | 96.69 | 97.91 |
| Rotated RetinaNet | 97.76 | 96.45 | 99.32 |
| KLD | 97.92 | 97.46 | 98.83 |
| Rotated Faster R-CNN | 98.09 | 97.53 | 98.76 |
| Gliding Vertex | 98.33 | 96.71 | 100 |
| Oriented RepPoints | 99.13 | 98.34 | 99.73 |
Table 7. mAP50 of Rotated RetinaNet trained on FARAD, MSTAR, and SIVED.

| Training data | mAP50_FARAD * | mAP50_MSTAR * |
|---|---|---|
| FARAD | 96.53 | - |
| MSTAR | - | 99.24 |
| SIVED | 96.77 | 99.32 |