Hyperspectral Image Classification with Feature-Oriented Adversarial Active Learning
Figure 1. Adversarial learning with high-level features. The process involves two distinct components: (i) a classifier divided into a convolutional module and a dense module, and (ii) a generative adversarial network (GAN) with two subnetworks, a feature generator *G* and a feature discriminator *D*. The convolutional module learns representative high-level features *f* from labeled hyperspectral samples *x*. The dense module transforms *f* into final predictions ŷ. The GAN provides adversarial learning with the high-level features. Specifically, *G* maps noise *z* into the high-level feature space to generate fake high-level features f̃, and *D* learns to distinguish *f* (treated as 'Real') from f̃ (treated as 'Fake'). Trained with both the real and the fake high-level features, *D* captures the feature variability of hyperspectral images and develops a strong, generalized discriminative capability. We leverage the well-trained *D* as the acquisition heuristic for active learning, measuring whether an unlabeled sample is worth querying.
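To make the division concrete, a minimal Keras sketch of the classifier split is given below. Only the two-module split (x → f → ŷ) is taken from the text; the patch size, number of input bands, layer widths, and kernel sizes are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_classifier(input_shape=(17, 17, 30), num_classes=16):
    """Classifier divided as in Figure 1: a convolutional module mapping
    samples x to high-level features f, and a dense module mapping f to
    predictions y_hat. Shapes and widths are illustrative assumptions."""
    conv_module = tf.keras.Sequential([
        layers.Conv2D(64, 3, padding="same", activation="relu",
                      input_shape=input_shape),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Flatten(),  # high-level features f (17*17*64 = 18,496 dims)
    ], name="conv_module")
    dense_module = tf.keras.Sequential([
        layers.Dense(256, activation="relu", input_shape=(17 * 17 * 64,)),
        layers.Dense(num_classes, activation="softmax"),  # predictions y_hat
    ], name="dense_module")
    classifier = tf.keras.Sequential([conv_module, dense_module],
                                     name="classifier")
    return conv_module, dense_module, classifier
```

The split matters because the convolutional module can later be frozen and reused on its own to produce the features that the discriminator scores during the query step.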
Figure 2. Feature map changes in the feature generator *G* (left) and the feature discriminator *D* (right). *G* starts with a dense layer that expands the low-dimensional input noise (e.g., 100 × 1) to an appropriate size (8192 × 1), which is then reshaped into a stack of small feature maps (4 × 4 × 512). 2D transposed convolutional layers (Transposed Conv2D) are applied to up-scale the feature maps. During this procedure, we crop the 16 × 16 feature maps to 15 × 15 by discarding the last row and the last column, so that they can be up-sampled smoothly to the target size (17 × 17 × 64). Flattening the feature maps of this size yields the generated fake high-level features. *D* simply comprises three dense layers and transforms the input real/fake high-level features into real-valued probabilities.
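The caption fixes the generator's shape progression (100 → 8192 → 4 × 4 × 512 → 8 × 8 → 16 × 16 → 15 × 15 → 17 × 17 × 64 → flatten), so it can be sketched fairly faithfully; the kernel sizes, intermediate channel counts, activations, and the hidden widths of *D* below are assumptions:

```python
def build_generator(noise_dim=100):
    """Feature generator G (Figure 2, left): noise z -> fake high-level
    features. Spatial sizes follow the caption; kernels/channels are assumed."""
    return tf.keras.Sequential([
        layers.Dense(8192, input_shape=(noise_dim,)),          # 100 -> 8192
        layers.Reshape((4, 4, 512)),                           # 4x4x512
        layers.Conv2DTranspose(256, 3, strides=2, padding="same",
                               activation="relu"),             # 8x8
        layers.Conv2DTranspose(128, 3, strides=2, padding="same",
                               activation="relu"),             # 16x16
        layers.Cropping2D(((0, 1), (0, 1))),                   # drop last row/col -> 15x15
        layers.Conv2DTranspose(64, 3, strides=1,
                               padding="valid"),               # (15-1)+3 = 17 -> 17x17x64
        layers.Flatten(),                                      # fake high-level features
    ], name="G")

def build_discriminator(feature_dim=17 * 17 * 64):
    """Feature discriminator D (Figure 2, right): three dense layers mapping
    real/fake high-level features to a real-valued probability."""
    return tf.keras.Sequential([
        layers.Dense(512, activation="relu", input_shape=(feature_dim,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ], name="D")
```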
Figure 3. Active query of unlabeled samples. Both the convolutional module of the classifier and the well-trained feature discriminator *D* are frozen. We feed unlabeled hyperspectral samples x̄ into the convolutional module to obtain a pool of high-level features f̄. We then pass those features through the well-trained *D* and sort them by their estimated probabilities. We query the *K* samples with the lowest probabilities, trace back to the corresponding hyperspectral samples, and label them additionally.
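A minimal sketch of this query step, assuming Keras-style models with a `predict` method; `k` is the per-loop query budget *K*:

```python
import numpy as np

def active_query(conv_module, discriminator, unlabeled_pool, k):
    """Figure 3's query step: with both networks frozen, return the indices
    of the K unlabeled samples whose high-level features D scores lowest
    (i.e., least 'real'), so they can be traced back and labeled."""
    features = conv_module.predict(unlabeled_pool)    # pool of features f_bar
    probs = discriminator.predict(features).ravel()   # D's estimated probabilities
    return np.argsort(probs)[:k]                      # top-K minimums
```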
Figure 4. Classification maps on Indian Pines. The first map is the ground truth. The remaining maps are predicted by our classifier with random sampling, least confidence, entropy sampling, Bayesian active learning disagreement (BALD), and our adversarially learned acquisition heuristic (i.e., our FAAL framework), respectively. Table 2 lists the classes shown in the maps. Regions in black are unlabeled.
Figure 5. Classification maps on Pavia University. The first map is the ground truth. The remaining maps are predicted by our classifier with random sampling, least confidence, entropy sampling, BALD, and our adversarially learned acquisition heuristic (i.e., our FAAL framework), respectively. Table 3 lists the classes shown in the maps. Regions in black are unlabeled.
Figure 6. Performance after each active learning loop on Indian Pines. The number of initial labeled samples is 80; there are 114, 148, 182, 216, and 250 labeled samples after the first through fifth active learning loops, respectively. Three common metrics are compared: OA (left), AA (middle), and KAPPA (right).
Figure 7. Performance after each active learning loop on Pavia University. The number of initial labeled samples is 45; there are 86, 127, 168, 209, and 250 labeled samples after the first through fifth active learning loops, respectively. Three common metrics are compared: OA (left), AA (middle), and KAPPA (right).
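For reference, the three metrics reported in Figures 6 and 7 can be computed from a confusion matrix as follows. This is the standard formulation, not code from the paper:

```python
import numpy as np

def oa_aa_kappa(confusion):
    """OA, AA, and Cohen's kappa from a square confusion matrix
    (rows: true classes, columns: predicted classes)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    oa = np.trace(confusion) / n                               # overall accuracy
    aa = np.mean(np.diag(confusion) / confusion.sum(axis=1))   # mean per-class accuracy
    pe = confusion.sum(axis=0) @ confusion.sum(axis=1) / n**2  # chance agreement
    return oa, aa, (oa - pe) / (1 - pe)                        # kappa
```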
Abstract
1. Introduction
- We develop an active deep learning framework, referred to as feature-oriented adversarial active learning (FAAL), for classifying hyperspectral images with limited labeled samples. The FAAL framework integrates a deep learning classifier with an active learning strategy, which improves the classifier's learning ability when labeled samples are limited.
- To the best of our knowledge, neither the focus on high-level features nor the adversarial learning methodology has been explored for active learning-based hyperspectral image classification. In contrast, the active learning within our FAAL framework is characterized by an acquisition heuristic established via high-level feature-oriented adversarial learning. This exploration enables our FAAL framework to comprehensively capture the feature variability of hyperspectral images and thus yields an effective hyperspectral image classification scheme.
- Our FAAL framework achieves state-of-the-art performance on two public hyperspectral image datasets for classifying hyperspectral images with limited labeled samples. The effectiveness of both the full FAAL framework and the adversarially learned acquisition heuristic is validated by rigorous experimental evaluations.
2. Preliminaries
2.1. Active Learning
2.2. Generative Adversarial Networks
3. Feature-Oriented Adversarial Active Learning
3.1. High-Level Features from Classifier Division
3.2. Adversarial Learning with High-Level Features
3.3. Active Query of Unlabeled Samples
3.4. Workflow of Full Framework
Algorithm 1 Feature-oriented adversarial active learning.

1: repeat
2: Update the classifier initially:
   Minimize Equation (1) with the initial labeled samples.
3: Update the GAN initially:
   a. Freeze the classifier and obtain the real high-level features f of the current labeled samples.
   b. Generate fake high-level features f̃ from noise z.
   c. Update D by minimizing Equation (2).
   d. Generate two groups of fake high-level features f̃₁ and f̃₂ from noise z₁ and z₂, respectively.
   e. Update G by minimizing Equations (3) and (4).
4: for a preset number of active learning loops do
5:   Active query of unlabeled samples:
     a. Freeze D.
     b. Query the K unlabeled high-level features with the minimum estimated probabilities.
     c. Trace back to the corresponding unlabeled samples and label them.
     d. Merge the newly labeled samples with the previous ones.
     e. Remove the queried samples from the unlabeled pool.
6:   Update the classifier using the current labeled samples.
7:   Update the GAN using the high-level features of the current labeled samples.
8: end for
9: for a preset number of additional loops do
10:  Active query of unlabeled samples.
11:  Update the classifier using the current labeled samples.
12: end for
13: until the given threshold is reached.
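Putting the pieces together, the outer loop of Algorithm 1 can be sketched as below. `train_classifier`, `train_gan`, and `move_samples` are hypothetical stand-ins for the updates driven by Equations (1)–(4) and the bookkeeping of step 5; `active_query` is the sketch given with Figure 3:

```python
def faal(classifier, conv_module, G, D, labeled, unlabeled,
         k, gan_loops, classifier_only_loops):
    """Sketch of Algorithm 1. train_classifier/train_gan/move_samples are
    hypothetical helpers, not the authors' implementation."""
    train_classifier(classifier, labeled)             # step 2: Equation (1)
    train_gan(G, D, conv_module, labeled)             # step 3: Equations (2)-(4)
    for _ in range(gan_loops):                        # steps 4-8
        idx = active_query(conv_module, D, unlabeled, k)            # step 5
        labeled, unlabeled = move_samples(labeled, unlabeled, idx)  # label, merge, remove
        train_classifier(classifier, labeled)         # step 6
        train_gan(G, D, conv_module, labeled)         # step 7
    for _ in range(classifier_only_loops):            # steps 9-12
        idx = active_query(conv_module, D, unlabeled, k)
        labeled, unlabeled = move_samples(labeled, unlabeled, idx)
        train_classifier(classifier, labeled)         # step 11
```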
4. Experimental Results and Discussion
4.1. Datasets
4.2. Implementation Details
4.3. Analysis of the Naive Classifier
4.4. Comparison with Other Active Learning Classifiers
4.5. Study on Acquisition Heuristics
5. Conclusions
- Since the GAN and the classifier can be separated from each other, imposing constraints on each of them is feasible. In this scenario, is there an additional constraint capable of further capturing the feature variability of hyperspectral images? As the active query is fully unsupervised and unrestricted by class, is there an additional constraint that encourages the query to be class-balanced?
- High-level feature space is commonly low dimensional compared to data space [60,61,62]. Working in a low-dimensional space reduces computational requirements and eases the burden of network design. We believe that state-of-the-art feature extraction and band selection methods would be effective in this direction. In addition, building a low-dimensional latent space external to the classifier would be constructive despite the additional burden.
- The task-agnostic property of our acquisition heuristic makes it scalable to other applications, possibly spanning from computer vision to remote sensing. Research in this direction would further examine the effectiveness of the adversarially learned, purely parameterized yet simple acquisition heuristic.
Author Contributions
Funding
Conflicts of Interest
References
1. Audebert, N.; Le Saux, B.; Lefevre, S. Deep Learning for Classification of Hyperspectral Data: A Comparative Review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 159–173.
2. Wan, Y.; Ma, A.; Zhong, Y.; Hu, X.; Zhang, L. Multiobjective Hyperspectral Feature Selection Based on Discrete Sine Cosine Algorithm. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3601–3618.
3. Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Liu, S.; Plaza, J.; Rasti, B.; Plaza, A. Advances in Hyperspectral Image and Signal Processing: A Comprehensive Overview of the State of the Art. IEEE Geosci. Remote Sens. Mag. 2017, 5, 37–78.
4. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and Multispectral Data Fusion: A Comparative Review of the Recent Literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56.
5. Luo, F.; Huang, H.; Duan, Y.; Liu, J.; Liao, Y. Local Geometric Structure Feature for Dimensionality Reduction of Hyperspectral Imagery. Remote Sens. 2017, 9, 790.
6. Luo, F.; Zhang, L.; Zhou, X.; Guo, T.; Cheng, Y.; Yin, T. Sparse-Adaptive Hypergraph Discriminant Analysis for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1082–1086.
7. Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A.J. Advanced Spectral Classifiers for Hyperspectral Images: A Review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–32.
8. Wang, G.; Ren, P. Delving Into Classifying Hyperspectral Images via Graphical Adversarial Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2019–2031.
9. Luo, F.; Zhang, L.; Du, B.; Zhang, L. Dimensionality Reduction With Enhanced Hybrid-Graph Discriminant Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5336–5353.
10. Ben Hamida, A.; Benoit, A.; Lambert, P.; Ben Amar, C. 3-D Deep Learning Approach for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434.
11. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D-2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281.
12. Fang, B.; Li, Y.; Zhang, H.; Chan, J.C.W. Collaborative Learning of Lightweight Convolutional Neural Network and Deep Clustering for Hyperspectral Image Semi-Supervised Classification with Limited Training Samples. ISPRS J. Photogramm. Remote Sens. 2020, 161, 164–178.
13. Li, X.; Ding, M.; Pižurica, A. Deep Feature Fusion via Two-Stream Convolutional Neural Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2615–2629.
14. Tran, T.; Do, T.T.; Reid, I.; Carneiro, G. Bayesian Generative Active Deep Learning. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6295–6304.
15. Paoletti, M.; Haut, J.; Plaza, J.; Plaza, A. Deep Learning Classifiers for Hyperspectral Imaging: A Review. ISPRS J. Photogramm. Remote Sens. 2019, 158, 279–317.
16. Yue, Z.; Gao, F.; Xiong, Q.; Wang, J.; Huang, T.; Yang, E.; Zhou, H. A Novel Semi-Supervised Convolutional Neural Network Method for Synthetic Aperture Radar Image Recognition. Cogn. Comput. 2019, 1–12.
17. Cao, X.; Yao, J.; Xu, Z.; Meng, D. Hyperspectral Image Classification With Convolutional Neural Network and Active Learning. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4604–4616.
18. Samat, A.; Li, J.; Liu, S.; Du, P.; Miao, Z.; Luo, J. Improved Hyperspectral Image Classification by Active Learning Using Pre-Designed Mixed Pixels. Pattern Recognit. 2016, 51, 43–58.
19. He, Z.; Liu, H.; Wang, Y.; Hu, J. Generative Adversarial Networks-Based Semi-Supervised Learning for Hyperspectral Image Classification. Remote Sens. 2017, 9, 1042.
20. Jia, S.; Zhuang, J.; Deng, L.; Zhu, J.; Xu, M.; Zhou, J.; Jia, X. 3-D Gaussian–Gabor Feature Extraction and Selection for Hyperspectral Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8813–8826.
21. Ducoffe, M.; Precioso, F. Adversarial Active Learning for Deep Networks: A Margin Based Approach. arXiv 2018, arXiv:1802.09841.
22. Zhu, J.-J.; Bento, J. Generative Adversarial Active Learning. arXiv 2017, arXiv:1702.07956.
23. Liu, C.; He, L.; Li, Z.; Li, J. Feature-Driven Active Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 341–354.
24. Ni, D.; Ma, H. Active Learning for Hyperspectral Image Classification Using Sparse Code Histogram and Graph-Based Spatial Refinement. Int. J. Remote Sens. 2017, 38, 923–948.
25. Zhang, Z.; Pasolli, E.; Crawford, M.M. An Adaptive Multiview Active Learning Approach for Spectral–Spatial Classification of Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2557–2570.
26. Zhang, Z.; Pasolli, E.; Crawford, M.M.; Tilton, J.C. An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 640–654.
27. Haut, J.M.; Paoletti, M.E.; Plaza, J.; Li, J.; Plaza, A. Active Learning With Convolutional Neural Networks for Hyperspectral Image Classification Using A New Bayesian Approach. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6440–6461.
28. Liu, C.; Li, J.; He, L. Superpixel-Based Semisupervised Active Learning for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 357–370.
29. Yoo, D.; Kweon, I.S. Learning Loss for Active Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 93–102.
30. Jedoui, K.; Krishna, R.; Bernstein, M.S.; Fei-Fei, L. Deep Bayesian Active Learning for Multiple Correct Outputs. arXiv 2019, arXiv:1912.01119.
31. Kendall, A.; Gal, Y. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5574–5584.
32. Wang, D.; Shang, Y. A New Active Labeling Method for Deep Learning. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 112–119.
33. Gal, Y.; Islam, R.; Ghahramani, Z. Deep Bayesian Active Learning with Image Data. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1183–1192.
34. Gao, F.; Ma, F.; Wang, J.; Sun, J.; Yang, E.; Zhou, H. Visual Saliency Modeling for River Detection in High-Resolution SAR Imagery. IEEE Access 2018, 6, 1000–1014.
35. Gao, F.; Huang, T.; Sun, J.; Wang, J.; Hussain, A.; Yang, E. A New Algorithm of SAR Image Target Recognition Based on Improved Deep Convolutional Neural Network. Cogn. Comput. 2019, 11, 809–824.
36. Jing, L.; Tian, Y. Self-Supervised Visual Feature Learning with Deep Neural Networks: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020.
37. Settles, B. Active Learning Literature Survey; Computer Sciences Technical Report 1648; University of Wisconsin–Madison: Madison, WI, USA, 2009.
38. Vondrick, C.; Ramanan, D. Video Annotation and Tracking with Active Learning. In Proceedings of the Advances in Neural Information Processing Systems, Granada, Spain, 12–17 December 2011; pp. 28–36.
39. Mottaghi, A.; Yeung, S. Adversarial Representation Active Learning. arXiv 2019, arXiv:1912.09720.
40. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680.
41. Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2015, arXiv:1511.06434.
42. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein Generative Adversarial Networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 214–223.
43. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved Training of Wasserstein GANs. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5767–5777.
44. Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative Adversarial Networks: An Overview. IEEE Signal Process. Mag. 2018, 35, 53–65.
45. Feng, J.; Yu, H.; Wang, L.; Cao, X.; Zhang, X.; Jiao, L. Classification of Hyperspectral Images Based on Multiclass Spatial–Spectral Generative Adversarial Networks. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5329–5343.
46. Zhang, M.; Gong, M.; Mao, Y.; Li, J.; Wu, Y. Unsupervised Feature Extraction in Hyperspectral Images Based on Wasserstein Generative Adversarial Network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2669–2688.
47. Zhong, Z.; Li, J.; Clausi, D.A.; Wong, A. Generative Adversarial Networks and Conditional Random Fields for Hyperspectral Image Classification. IEEE Trans. Cybern. 2020, 50, 3318–3329.
48. Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative Adversarial Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063.
49. Hang, R.; Zhou, F.; Liu, Q.; Ghamisi, P. Classification of Hyperspectral Images via Multitask Generative Adversarial Networks. IEEE Trans. Geosci. Remote Sens. 2020.
50. Wang, X.; Tan, K.; Du, Q.; Chen, Y.; Du, P. Caps-TripleGAN: GAN-Assisted CapsNet for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7232–7245.
51. He, L.; Li, J.; Liu, C.; Li, S. Recent Advances on Spectral–Spatial Hyperspectral Image Classification: An Overview and New Guidelines. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1579–1597.
52. Gao, Q.; Lim, S.; Jia, X. Spectral–Spatial Hyperspectral Image Classification Using A Multiscale Conservative Smoothing Scheme and Adaptive Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7718–7730.
53. Tyo, J.S.; Konsolakis, A.; Diersen, D.I.; Olsen, R.C. Principal-Components-Based Display Strategy for Spectral Imagery. IEEE Trans. Geosci. Remote Sens. 2003, 41, 708–718.
54. Jiang, J.; Ma, J.; Chen, C.; Wang, Z.; Cai, Z.; Wang, L. SuperPCA: A Superpixelwise PCA Approach for Unsupervised Feature Extraction of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4581–4593.
55. Goodfellow, I. NIPS 2016 Tutorial: Generative Adversarial Networks. arXiv 2017, arXiv:1701.00160.
56. Mao, Q.; Lee, H.Y.; Tseng, H.Y.; Ma, S.; Yang, M.H. Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 1429–1437.
57. Sinha, S.; Ebrahimi, S.; Darrell, T. Variational Adversarial Active Learning. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 5971–5980.
58. Jia, S.; Deng, X.; Zhu, J.; Xu, M.; Zhou, J.; Jia, X. Collaborative Representation-Based Multiscale Superpixel Fusion for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7770–7784.
59. Jia, S.; Deng, X.; Meng, X.; Zhou, J.; Jia, X. Superpixel-Level Weighted Label Propagation for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 5077–5091.
60. Luo, F.; Du, B.; Zhang, L.; Zhang, L.; Tao, D. Feature Learning Using Spatial-Spectral Hypergraph Discriminant Analysis for Hyperspectral Image. IEEE Trans. Cybern. 2019, 49, 2406–2419.
61. Liu, J.; Yao, Y.; Ren, J. An Acceleration Framework for High Resolution Image Synthesis. arXiv 2019, arXiv:1909.03611.
62. Chen, J.; Xie, Y.; Wang, K.; Zhang, C.; Vannan, M.A.; Wang, B.; Qian, Z. Active Image Synthesis for Efficient Labeling. IEEE Trans. Pattern Anal. Mach. Intell. 2020.
Table 1. Datasets used in the experiments.

| Dataset | Sensor | Size | No. Available Bands | Spectral Range (μm) | GSD (m) | No. Classes |
|---|---|---|---|---|---|---|
| Indian Pines | AVIRIS | 145 × 145 | 200 | 0.4–2.5 | 20 | 16 |
| Pavia University | ROSIS | 610 × 340 | 103 | 0.43–0.85 | 1.3 | 9 |
Table 2. Indian Pines classes.

| # | Class | No. Labeled Pixels |
|---|---|---|
| 1 | Alfalfa | 46 |
| 2 | Corn-Notill | 1428 |
| 3 | Corn-Mintill | 830 |
| 4 | Corn | 237 |
| 5 | Grass-Pasture | 483 |
| 6 | Grass-Trees | 730 |
| 7 | Grass-Pasture-Mowed | 28 |
| 8 | Hay-Windrowed | 478 |
| 9 | Oats | 20 |
| 10 | Soybean-Notill | 972 |
| 11 | Soybean-Mintill | 2455 |
| 12 | Soybean-Clean | 593 |
| 13 | Wheat | 205 |
| 14 | Woods | 1265 |
| 15 | Bldg-Grass-Trees-Drives | 386 |
| 16 | Stones-Steel-Towers | 93 |
| | Total | 10,249 |
Table 3. Pavia University classes.

| # | Class | No. Labeled Pixels |
|---|---|---|
| 1 | Asphalt | 6631 |
| 2 | Meadows | 18,649 |
| 3 | Gravel | 2099 |
| 4 | Trees | 3064 |
| 5 | Painted-Metal-Sheets | 1345 |
| 6 | Bare-Soil | 5029 |
| 7 | Bitumen | 1330 |
| 8 | Self-Blocking-Bricks | 3682 |
| 9 | Shadows | 947 |
| | Total | 42,776 |
Table 4. Performance of the classifier alone and the full FAAL framework with PCA and SuperPCA preprocessing.

| Method | Indian Pines OA (%) | Indian Pines AA (%) | Indian Pines k | Pavia University OA (%) | Pavia University AA (%) | Pavia University k |
|---|---|---|---|---|---|---|
| Classifier (PCA) | 81.15 | 81.30 | 81.59 | 87.48 | 75.91 | 83.23 |
| Classifier (SuperPCA) | 87.39 | 89.16 | 85.60 | 88.93 | 82.14 | 85.23 |
| FAAL (PCA) | 84.71 | 88.10 | 82.54 | 92.39 | 85.51 | 89.83 |
| FAAL (SuperPCA) | 91.41 | 93.20 | 90.20 | 93.47 | 88.97 | 91.34 |
Table 5. Comparison with other active learning classifiers on Indian Pines (rows 1–16 are per-class accuracies in %).

Metric/Method | AL-SV | AL-SV-HSeg | AL-MV | AL-MV-HSeg | AL-MVE-HSeg | FAAL (250) | AL-CNN-MRF (416) | FAAL (300)
---|---|---|---|---|---|---|---|---|
OA (%) | 61.96 | 81.64 | 51.19 | 80.00 | 87.10 | 91.41 | 92.26 | 93.91 |
AA (%) | 62.16 | 82.52 | 48.64 | 83.14 | 90.13 | 93.20 | 86.54 | 95.16 |
k | 56.30 | 79.11 | 43.67 | 77.07 | 85.34 | 90.20 | - | 93.07 |
1 | 51.22 | 76.77 | 36.71 | 56.18 | 64.59 | 99.29 | 84.17 | 100 |
2 | 33.82 | 72.94 | 19.12 | 63.01 | 84.09 | 82.35 | 91.00 | 88.16 |
3 | 43.21 | 66.82 | 28.62 | 74.83 | 84.95 | 89.81 | 83.64 | 98.52 |
4 | 49.94 | 59.24 | 13.08 | 60.83 | 82.20 | 89.88 | 87.01 | 91.78 |
5 | 71.93 | 82.27 | 66.11 | 88.67 | 92.08 | 87.59 | 91.57 | 90.39 |
6 | 31.99 | 75.26 | 41.49 | 72.81 | 79.00 | 92.14 | 95.18 | 94.62 |
7 | 41.26 | 93.30 | 20.38 | 92.94 | 94.58 | 100 | 89.13 | 100 |
8 | 59.80 | 86.59 | 59.88 | 83.62 | 92.98 | 100 | 98.93 | 100 |
9 | 79.38 | 94.45 | 71.55 | 92.36 | 89.06 | 100 | 16.08 | 100 |
10 | 86.00 | 96.94 | 41.37 | 97.27 | 97.78 | 85.94 | 90.68 | 91.94 |
11 | 87.21 | 99.27 | 92.43 | 99.18 | 99.71 | 97.44 | 94.70 | 96.43 |
12 | 77.25 | 95.56 | 35.58 | 75.94 | 94.36 | 78.91 | 91.51 | 83.24 |
13 | 82.24 | 98.55 | 48.71 | 98.14 | 99.56 | 98.88 | 99.25 | 98.90 |
14 | 85.49 | 91.13 | 63.54 | 96.21 | 98.63 | 95.20 | 95.73 | 96.33 |
15 | 29.75 | 88.99 | 59.41 | 90.63 | 95.65 | 98.38 | 83.99 | 94.86 |
16 | 84.05 | 89.74 | 80.22 | 87.56 | 92.87 | 95.37 | 92.08 | 97.33 |
Table 6. Comparison with other active learning classifiers on Pavia University (rows 1–9 are per-class accuracies in %).

Metric/Method | AL-SV-HSeg | FAAL (250) | AL-CNN-MRF (321) | FAAL (320)
---|---|---|---|---|
OA (%) | 92.23 | 93.47 | 97.43 | 97.14 |
AA (%) | 92.66 | 88.97 | 94.80 | 95.07 |
k | 90.05 | 91.34 | - | 96.20 |
1 | 90.00 | 92.01 | 98.18 | 95.29 |
2 | 93.59 | 97.36 | 99.82 | 99.96 |
3 | 86.21 | 97.93 | 78.46 | 99.01 |
4 | 92.65 | 70.62 | 93.86 | 84.89 |
5 | 97.62 | 77.32 | 99.05 | 94.14 |
6 | 90.34 | 99.09 | 98.46 | 99.77 |
7 | 95.59 | 99.69 | 94.77 | 99.23 |
8 | 90.72 | 96.26 | 97.71 | 96.32 |
9 | 97.25 | 70.76 | 92.91 | 94.06 |
Table 7. Study on acquisition heuristics on Indian Pines (rows 1–16 are per-class accuracies in %).

Metric/Heuristic | Random Sampling | Least Confidence | Entropy Sampling | BALD | FAAL
---|---|---|---|---|---|
OA (%) | 88.61 | 89.11 | 89.30 | 90.41 | 91.41 |
AA (%) | 91.55 | 91.58 | 92.28 | 92.47 | 93.20 |
k | 87.01 | 87.60 | 87.81 | 89.07 | 90.20 |
1 | 98.60 | 100 | 100 | 100 | 99.29 |
2 | 89.09 | 83.95 | 84.69 | 83.92 | 82.35 |
3 | 77.16 | 81.44 | 88.20 | 94.28 | 89.81 |
4 | 90.00 | 91.43 | 93.33 | 93.33 | 89.88 |
5 | 89.72 | 73.21 | 79.10 | 79.91 | 87.59 |
6 | 92.87 | 96.98 | 96.93 | 95.25 | 92.14 |
7 | 100 | 100 | 100 | 98.33 | 100 |
8 | 100 | 99.92 | 100 | 99.68 | 100 |
9 | 100 | 100 | 100 | 100 | 100 |
10 | 81.19 | 88.41 | 81.48 | 80.04 | 85.94 |
11 | 93.77 | 91.03 | 91.15 | 95.11 | 97.44 |
12 | 74.81 | 77.61 | 79.20 | 72.65 | 78.91 |
13 | 97.75 | 97.57 | 97.75 | 94.94 | 98.88 |
14 | 85.71 | 95.34 | 93.50 | 95.55 | 95.20 |
15 | 95.88 | 94.22 | 92.94 | 98.14 | 98.38 |
16 | 98.38 | 94.24 | 98.15 | 98.35 | 95.37 |
Table 8. Study on acquisition heuristics on Pavia University (rows 1–9 are per-class accuracies in %).

Metric/Heuristic | Random Sampling | Least Confidence | Entropy Sampling | BALD | FAAL
---|---|---|---|---|---|
OA (%) | 90.16 | 90.36 | 91.08 | 92.09 | 93.47 |
AA (%) | 83.69 | 85.28 | 86.35 | 88.24 | 88.97 |
k | 86.91 | 87.19 | 88.17 | 89.51 | 91.34 |
1 | 83.90 | 75.08 | 79.53 | 82.31 | 92.01 |
2 | 96.06 | 98.22 | 98.05 | 97.49 | 97.36 |
3 | 98.58 | 98.85 | 98.27 | 99.44 | 97.93 |
4 | 64.56 | 70.11 | 66.23 | 61.68 | 70.62 |
5 | 67.53 | 72.81 | 80.38 | 83.38 | 77.32 |
6 | 94.68 | 91.96 | 93.89 | 99.57 | 99.09 |
7 | 99.53 | 99.73 | 99.33 | 99.61 | 99.69 |
8 | 96.77 | 98.08 | 97.13 | 99.08 | 96.26 |
9 | 51.63 | 62.66 | 64.38 | 71.74 | 70.76 |