RAISE: Rank-Aware Incremental Learning for Remote Sensing Object Detection
- Schematics of the initial scenario and the instance-incremental scenario. Provided that the classes remain constant over time, new instances appear continuously.
- Method structure of RAIIL. The topmost two subfigures display the two parts of RAIIL, IR and RAIL, respectively. In IR, for unlabeled new instances, the uncertainty (UC)-based rank-score is calculated using Formula (1), and the new instances are ranked towards high rank-score values; for old instances, the UC- and inaccuracy (IA)-based rank-score is calculated using Formula (4), and the old instances are ranked towards diverse rank-score values. After new instances are labeled, they are rescored in terms of the UC- and IA-based rank-score. After ranking and sampling, the two kinds of data together form the incremental training dataset. In RAIL, the loss function adopts a rank-aware loss (Formula (5)) during model fine-tuning. The bottommost two subfigures display the calculation methods for UC and IA, respectively. UC is a comprehensive estimate of the model's non-determinacy with respect to the predicted results for the original image and the transformed image, while IA is an estimate of the degree of non-coincidence between an image's predicted results and its true labels.
- Precision results for multiple sampling proportions of new instances on DIOR. For each of the three methods (MB, FL, and FT), the solid line represents the average over 5 random experiments, and the dotted lines represent the maximum and minimum values. (a) mAP. (b) AP50. (c) AP75.
- Precision results for multiple sampling proportions of new instances on DOTA. For each of the three methods (MB, FL, and FT), the solid line represents the average over 5 random experiments, and the dotted lines represent the maximum and minimum values. (a) mAP. (b) AP50. (c) AP75.
- Precision results for RAIIL and MB at multiple sampling proportions of old instances. (a) mAP. (b) AP50. (c) AP75.
- Statistics for the number of labeled objects among new instances. (a) Statistical results for DIOR. (b) Statistical results for DOTA.
- An example of rank results for some new-instance samples in DIOR. The first column shows the rank results and the UC-based rank-score; the middle two columns visualize the predicted results for the original new instances and the transformed samples; the last column shows the true labels of the samples.
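The UC and IA estimates described in the captions above can be sketched in code. This is a hypothetical rendering, not the paper's Formulas (1) and (4): here UC is taken as the mean binary entropy of detection confidences on the original and transformed image, and IA as one minus the mean best IoU between ground-truth boxes and predictions.

```python
import math

def uncertainty(scores_orig, scores_trans):
    """UC sketch: mean binary entropy of detection confidences on the
    original image and a transformed (e.g., flipped) copy. Hypothetical;
    the paper's Formula (1) is not reproduced in this excerpt."""
    def mean_entropy(scores):
        if not scores:
            return 0.0
        ent = 0.0
        for s in scores:
            s = min(max(s, 1e-7), 1 - 1e-7)  # clip to avoid log(0)
            ent += -s * math.log(s) - (1 - s) * math.log(1 - s)
        return ent / len(scores)
    return 0.5 * (mean_entropy(scores_orig) + mean_entropy(scores_trans))

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def inaccuracy(pred_boxes, gt_boxes):
    """IA sketch: one minus the mean best-IoU of ground-truth boxes
    against the predictions (degree of non-coincidence). Hypothetical;
    the paper's Formula (4) is not reproduced in this excerpt."""
    if not gt_boxes:
        return 0.0
    if not pred_boxes:
        return 1.0
    best = [max(iou(p, g) for p in pred_boxes) for g in gt_boxes]
    return 1.0 - sum(best) / len(best)
```

In this sketch, new instances would be ranked by UC alone before labeling; once labels are available (and for retained old instances), UC and IA can be combined into the final rank-score.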
Abstract
1. Introduction
- For the object-detection task on remote sensing images, a rank-aware instance-incremental learning method is proposed for the instance-incremental scenario: an incremental learning paradigm that exploits the order in which data are learned, together with a training strategy for learning streaming data of differing value.
- A calculation method for the rank-score was designed based on the uncertainty and inaccuracy of the predicted results, allowing new and old instances to be ranked adaptively; it also provides a uniform direction for weighting samples during model training.
- Experiments were conducted on two widely used remote sensing image datasets. They verify the superiority of the proposed method over existing methods, and an ablation experiment and a hyperparameter analysis verify the intrinsic effectiveness of each part of the method.
2. Related Work
2.1. Object Detection for Remote Sensing Images
2.2. Incremental Learning in Deep Learning
3. Methods
3.1. Overview
Algorithm 1: Rank-aware instance-incremental learning (RAIIL).
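The algorithm listing itself is not reproduced in this excerpt. Going by the overview (rank unlabeled new instances by the UC-based rank-score, label and rescore them with the UC- and IA-based rank-score, rank retained old instances the same way, then fine-tune with a rank-aware loss), one hedged sketch of a single increment is:

```python
def raiil_step(model, new_unlabeled, old_instances,
               rank_score_uc, rank_score_uc_ia,
               label_fn, sample, fine_tune_rank_aware):
    """Hypothetical sketch of one RAIIL increment; the callable arguments
    stand in for the paper's components and are not its actual API."""
    # IR, new instances: rank unlabeled data towards high UC-based rank-scores.
    ranked_new = sorted(new_unlabeled, key=rank_score_uc, reverse=True)
    labeled_new = [label_fn(x) for x in sample(ranked_new)]
    # Labeled new instances are rescored with the UC- and IA-based rank-score.
    rescored_new = [(x, rank_score_uc_ia(x)) for x in labeled_new]
    # IR, old instances: rank towards diverse UC- and IA-based rank-scores.
    ranked_old = [(x, rank_score_uc_ia(x)) for x in old_instances]
    # The two kinds of data together form the incremental training dataset.
    train_set = rescored_new + sample(ranked_old)
    # RAIL: fine-tune with the rank-aware loss over the ranked dataset.
    fine_tune_rank_aware(model, train_set)
    return model
```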
3.2. Instance Rank
3.2.1. New Instances Rank for Plasticity
3.2.2. Old Instances Rank for Stability
3.3. Rank-Aware Incremental Learning
4. Materials for Experiments
4.1. Datasets
4.2. Baselines
4.3. Experiment Setup
4.3.1. Division of Datasets for the Instance Incremental Scenario
4.3.2. Sampling Proportion Setting for IL
4.3.3. Training Configuration
4.4. Precision Metrics
5. Results
5.1. Performance
5.1.1. Comparison of All Methods
5.1.2. Further Comparison of MB and RAIIL Involving Old Instances Retention
5.2. Labeling Cost for New Instances
5.3. Ablation Experiment
5.4. Visualization of Rank Results
5.5. Parametric Sensitivity
5.6. Exploring the Influence of Data Order
6. Discussion
6.1. Strengths and Limitations
- The precision results show that prioritizing data sampled according to the rank-score, and giving greater weight to samples with high rank-scores, effectively extracts key knowledge; they also confirm that calculating the rank-score from the uncertainty and inaccuracy of the predicted results is appropriate. New instances with high prediction uncertainty have greater learning value, and after scoring by both prediction uncertainty and prediction inaccuracy, representative old instances spanning diverse rank-scores have greater learning value.
- The method resolves both which new and old instances should be selected and how incremental training should be conducted. The ablation results show that the proposed method integrates its components well and that every part of the method is effective.
- Compared with other methods, our method labels fewer data, saving annotation costs.
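The sample-weighting idea in the first point can be illustrated with a minimal sketch. This is not the paper's Formula (5); it simply assumes each sample's loss is scaled by its normalized rank-score, so samples with high rank-scores contribute more to the training signal:

```python
def rank_aware_loss(per_sample_losses, rank_scores, eps=1e-8):
    """Illustrative rank-aware weighting (not the paper's Formula (5)):
    each sample's loss is scaled by its normalized rank-score, so samples
    with high rank-scores dominate the aggregate loss."""
    total = sum(rank_scores) + eps  # eps guards against an all-zero batch
    return sum((rs / total) * loss
               for rs, loss in zip(rank_scores, per_sample_losses))
```

In an actual detector this weighting would be applied to the per-image detection loss inside the fine-tuning loop; the uniform weighting direction (higher rank-score, larger weight) matches the description above.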
6.2. Future Research Directions
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
ALDOD | Active learning for deep object detection |
AlexNet | ImageNet classification with deep convolutional neural networks |
AP | Average precision |
AP50 | The AP at an IoU threshold of 0.5 |
AP75 | The AP at an IoU threshold of 0.75 |
BING | Binarized normed gradient |
CNN | Convolutional neural network |
COCO | Common objects in context |
EWC | Elastic weight consolidation |
Fast R-CNN | Upgraded version of R-CNN |
Faster R-CNN | Upgraded version of Fast R-CNN |
FL | Focal loss |
FPN | Feature pyramid network |
FT | Fine-tuning |
GEM | Gradient episodic memory |
GoogleNet | Going deeper with convolutions |
GPU | Graphics processing unit |
HOG | Histogram of oriented gradient |
IA | Inaccuracy |
iCaRL | Incremental classifier and representation learning |
IL | Incremental learning |
IoU | Intersection over union |
IR | Instance rank |
LBPs | Local binary patterns |
LwF | Learning without forgetting |
mAP | The mean of the AP over all IoU thresholds (0.5:0.05:0.85) |
MB | Memory buffer |
NIR | New instances rank |
OIR | Old instances rank |
RAIIL | Rank-aware instance-incremental learning |
RAIL | Rank-aware incremental learning |
R-CNN | Region-based CNN |
ResNet | Residual network |
RetinaNet | One-stage detector with dense sampling for focal loss |
ROI | Region of interest |
RPN | Region proposal network |
RS | Rank-score |
UC | Uncertainty |
VGG | Visual geometry group |
YOLO | You only look once |
YOLO9000 | Better, faster, stronger YOLO |
Datasets (Number of Images) | Old Instances | New Instances | Test Instances
---|---|---|---
DIOR | 5862 | 5863 | 11,725
DOTA | 4602 | 4521 | 2796

Datasets (Number of Objects) | Old Instances | New Instances | Test Instances
---|---|---|---
DIOR | 32,592 | 35,437 | 124,443
DOTA | 54,134 | 52,826 | 31,168
Ablation Settings (mAP) | 20% | 40% | 60% | 80%
---|---|---|---|---
NIR | 37.99 | 40.42 | 41.00 | 41.92
NIR+RAIL | +0.53 | −0.47 | +0.36 | +0.12
NIR+OIR | +1.90 | +1.05 | +0.65 | +0.73
RAIIL | +2.32 | +1.41 | +0.87 | +0.85

Ablation Settings (AP50) | 20% | 40% | 60% | 80%
---|---|---|---|---
NIR | 61.38 | 64.21 | 64.62 | 65.24
NIR+RAIL | +0.91 | −0.35 | +0.95 | +0.26
NIR+OIR | +1.81 | +1.05 | +0.48 | +1.12
RAIIL | +2.98 | +1.70 | +1.01 | +1.42

Ablation Settings (AP75) | 20% | 40% | 60% | 80%
---|---|---|---|---
NIR | 40.84 | 43.65 | 44.36 | 45.70
NIR+RAIL | +0.60 | −0.59 | +0.46 | +0.27
NIR+OIR | +2.37 | +1.37 | +1.40 | +0.82
RAIIL | +2.64 | +1.85 | +1.18 | +0.86
Hyperparameters | mAP | AP50 | AP75 |
---|---|---|---|
= 1, = 1 (default) | 40.30 | 64.36 | 43.48 |
= 1, = 0 | 40.29 | 63.38 | 44.00 |
= 1, = 2 | 39.93 | 63.57 | 43.04 |
= 1, = 3 | 39.86 | 63.62 | 43.07 |
= 1, = 4 | 40.45 | 64.58 | 43.72 |
= 1, = 5 | 40.66 (max) | 65.10 (max) | 44.13 (max) |
= 0, = 1 | 39.36 (min) | 61.64 (min) | 42.47 (min) |
= 2, = 1 | 39.62 | 63.99 | 42.92 |
= 3, = 1 | 40.14 | 64.87 | 43.61 |
= 4, = 1 | 40.29 | 64.52 | 43.89 |
= 5, = 1 | 39.83 | 64.50 | 42.89 |
Conditions | MB mAP | MB AP50 | MB AP75 | RAIIL mAP | RAIIL AP50 | RAIIL AP75
---|---|---|---|---|---|---
setting 2-1 | 39.53 | 62.98 | 42.98 | 40.89 | 64.29 | 44.43
setting 2-2 | 40.64 | 64.26 | 44.19 | 41.58 | 65.66 | 45.09
setting 2-3 | 41.72 | 65.33 | 45.45 | 41.75 | 65.05 | 45.67
setting 2-4 | 42.22 | 65.95 | 46.02 | 42.71 | 66.17 | 46.68
Mean of four results under setting 2 | 41.03 | 64.63 | 44.66 | 41.73 | 65.29 | 45.47
Mean of four results under setting 1 | 40.73 | 64.05 | 41.69 | 41.69 | 65.64 | 45.27
Absolute difference in mean | 0.30 | 0.58 | 2.97 | 0.04 (min) | 0.35 (min) | 0.20 (min)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, H.; Chen, Y.; Zhang, Z.; Peng, J. RAISE: Rank-Aware Incremental Learning for Remote Sensing Object Detection. Symmetry 2022, 14, 1020. https://doi.org/10.3390/sym14051020