Semi-Supervised Deep Learning Classification for Hyperspectral Image Based on Dual-Strategy Sample Selection
"> Figure 1
<p>The structures of (<b>a</b>) supervised and (<b>b</b>) semi-supervised spectral–spatial deep learning for HSI classification.</p> "> Figure 2
<p>Overview of semi-supervised deep learning framework for hyperspectral image (HSI) classification. The training of the framework mainly two iterative steps: (1) Training the spectral- and spatial- models over the respective data based on the labeled pool (indicated as solid lines); (2) applying each model to predict the unlabeled HSI data and use respective sample selection strategy to select the most confident samples for the other (indicated as dashed lines. See details in the text). After all iterations of co-training are completed, the classification results of the test dataset which obtained through two training networks were fused, and then the label of the test dataset was obtained (indicated as solid black lines).</p> "> Figure 3
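For concreteness, the loop below sketches this dual-view co-training procedure in Python. It is a minimal illustration, not the authors' implementation: two off-the-shelf classifiers stand in for the spectral and spatial networks, and the confidence threshold, the helper names (`confident`, `labels_for`), the probability-averaging fusion, and the synthetic data are all assumptions made for the example.

```python
# Minimal dual-view co-training sketch (stand-in classifiers, synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-pixel "spectral" vectors and flattened "spatial" patches.
n_pixels, n_bands, n_classes = 600, 20, 4
X_spec = rng.normal(size=(n_pixels, n_bands))
X_spat = rng.normal(size=(n_pixels, 3 * 3 * n_bands))
y_true = rng.integers(0, n_classes, size=n_pixels)

labeled = rng.choice(n_pixels, size=40, replace=False)                 # initial labeled pool
unlabeled = np.setdiff1d(np.arange(n_pixels), labeled)

spec_model = LogisticRegression(max_iter=500)                          # stand-in for the spectral network
spat_model = RandomForestClassifier(n_estimators=50, random_state=0)   # stand-in for the spatial ResNet
lab_spec, lab_spat = labeled.copy(), labeled.copy()                    # each view keeps its own labeled pool
pseudo = {}                                                            # pseudo-labels assigned during co-training

def labels_for(idx):
    """True labels for the initial pool, pseudo-labels for samples added later."""
    return np.array([pseudo.get(int(i), y_true[i]) for i in idx])

def confident(model, X, idx, thr=0.95):
    """Indices and predicted labels of unlabeled samples whose top class probability exceeds thr."""
    prob = model.predict_proba(X[idx])
    top = prob.max(axis=1)
    mask = top > thr
    return idx[mask], model.classes_[prob.argmax(axis=1)][mask]

for it in range(4):
    # Step 1: train each view's model on its current labeled pool.
    spec_model.fit(X_spec[lab_spec], labels_for(lab_spec))
    spat_model.fit(X_spat[lab_spat], labels_for(lab_spat))
    if unlabeled.size == 0:
        break                                                          # nothing left to pseudo-label
    # Step 2: each model labels the unlabeled data; its most confident picks augment the OTHER view.
    idx_s, lbl_s = confident(spec_model, X_spec, unlabeled)
    idx_p, lbl_p = confident(spat_model, X_spat, unlabeled)
    pseudo.update({int(i): int(l) for i, l in zip(idx_s, lbl_s)})
    pseudo.update({int(i): int(l) for i, l in zip(idx_p, lbl_p)})
    lab_spat = np.union1d(lab_spat, idx_s)                             # spectral picks go to the spatial pool
    lab_spec = np.union1d(lab_spec, idx_p)                             # spatial picks go to the spectral pool
    unlabeled = np.setdiff1d(unlabeled, np.union1d(idx_s, idx_p))

# After co-training, fuse the two models' class probabilities and take the argmax as the final label.
fused = (spec_model.predict_proba(X_spec) + spat_model.predict_proba(X_spat)) / 2.0
y_pred = spec_model.classes_[fused.argmax(axis=1)]
print("fused overall accuracy on synthetic data:", float((y_pred == y_true).mean()))
```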
Figure 3. A residual network for the spatial-ResNet model, which contains two "bottleneck" building blocks, each with one shortcut connection. The number on each building block is the number of output feature maps. F(x) is the residual mapping and x is the identity mapping; each residual function uses a stack of three layers, and the original mapping is represented as F(x) + x.
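A minimal PyTorch sketch of one such bottleneck block is given below, assuming the common 1×1–3×3–1×1 convolution stack with batch normalization and an identity shortcut; the channel widths, patch size, and module names are illustrative assumptions rather than the exact spatial-ResNet configuration.

```python
# Illustrative bottleneck residual block: y = F(x) + x, with F(x) a stack of three conv layers.
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, channels: int, mid_channels: int):
        super().__init__()
        # Residual branch F(x): 1x1 reduce -> 3x3 -> 1x1 expand, each followed by batch norm.
        self.residual = nn.Sequential(
            nn.Conv2d(channels, mid_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity shortcut: the input x is added back onto the residual mapping F(x).
        return self.relu(self.residual(x) + x)

# Example: a spatial feature patch with 64 channels passes through two stacked blocks.
blocks = nn.Sequential(Bottleneck(64, 16), Bottleneck(64, 16))
out = blocks(torch.randn(1, 64, 7, 7))
print(out.shape)  # torch.Size([1, 64, 7, 7])
```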
Figure 4. The process of the sample selection mechanism based on spatial features.

Figure 5. AVIRIS Indian Pines image. (a) False-color image. (b) Reference image. The black area denotes unlabeled pixels.

Figure 6. ROSIS-03 University of Pavia image. (a) True-color image. (b) Reference image. The black area denotes unlabeled pixels.

Figure 7. AVIRIS Salinas Valley image. (a) True-color image. (b) Reference image. The black area denotes unlabeled pixels.

Figure 8. Hyperion image. (a) True-color image. (b) Reference image. The black area denotes unlabeled pixels.

Figure 9. Classification results on the AVIRIS Indian Pines data. (a) False-color image. (b) Reference image. (c) Label = 5, OA = 92.35%. (d) Label = 10, OA = 96.34%. (e) Label = 15, OA = 98.38%. (f) Label = 20, OA = 98.2%.

Figure 10. Classification results on the ROSIS Pavia University data. (a) True-color image. (b) Reference image. (c) Label = 5, OA = 89.7%. (d) Label = 10, OA = 97.13%. (e) Label = 15, OA = 98.9%. (f) Label = 20, OA = 99.40%.

Figure 11. Classification results on the AVIRIS Salinas Valley data. (a) True-color image. (b) Reference image. (c) Label = 5, OA = 95.69%. (d) Label = 10, OA = 98.09%. (e) Label = 15, OA = 98.59%. (f) Label = 20, OA = 99.08%.

Figure 12. Classification results on the Hyperion data with 5, 10, and 15 initial labeled samples per class. (a) True-color image. (b) Label = 5, OA = 82.56%. (c) Label = 10, OA = 91.89%. (d) Label = 15, OA = 93.97%.

Figure 13. Influence of network hyper-parameters. (a) Overall accuracies for different kernel numbers. (b) Overall accuracies for different spatial sizes.

Figure 14. Effect of the number of co-training iterations. (a) Classification accuracies for different numbers of iterations. (b) Time cost of network training for different numbers of co-training iterations. (c) Time cost of sample selection for different numbers of co-training iterations.
Abstract
1. Introduction
2. Method
2.1. Overview
2.2. Network Architectures Based on Spectral and Spatial Features
2.3. Dual-Strategy Sample Selection Co-Training
2.3.1. New Sample Selection Mechanism Based on Spectral Feature
2.3.2. Sample Selection Mechanism Based on Spatial Feature
2.3.3. Co-Training
3. Experimental Results and Analyses
3.1. Dataset Description and Experimental Settings
3.2. Experimental Results on the AVIRIS Indian Pines Dataset
3.3. Experimental Results on the ROSIS-03 University of Pavia Dataset
3.4. Experimental Results on the AVIRIS Salinas Valley Dataset
3.5. Experimental Results on the Hyperion Dataset
4. Discussion
4.1. Influence of Network Hyper-Parameters
4.2. Effect of the Number of Iterations in Co-Training
4.3. Sample Selection Mechanism Analysis in Co-Training
5. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
Class-specific accuracies, OA, AA, Kappa, and F1-measure (%) of the proposed method on the AVIRIS Indian Pines dataset with 5, 10, 15, and 20 labeled samples per class (mean ± standard deviation; the summary metrics are illustrated in the code sketch after the table).

| Class No. | Number of Samples | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| 1 | 46 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 2 | 1428 | 82.99 ± 6.41 | 94.56 ± 2.12 | 97.27 ± 1.07 | 97.74 ± 0.52 |
| 3 | 830 | 86.16 ± 8.97 | 94.07 ± 0.99 | 97.57 ± 1.39 | 96.69 ± 1.14 |
| 4 | 237 | 99.57 ± 0.67 | 98.40 ± 2.57 | 99.48 ± 0.72 | 95.08 ± 3.34 |
| 5 | 483 | 95.68 ± 1.85 | 98.87 ± 1.17 | 99.22 ± 0.66 | 99.35 ± 0.41 |
| 6 | 730 | 98.83 ± 0.95 | 99.64 ± 0.76 | 99.86 ± 0.22 | 99.74 ± 0.27 |
| 7 | 28 | 97.83 ± 3.64 | 99.31 ± 1.97 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 8 | 478 | 99.83 ± 0.25 | 99.97 ± 0.07 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 9 | 20 | 95.56 ± 5.44 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 10 | 972 | 87.94 ± 3.53 | 96.09 ± 2.11 | 98.38 ± 0.92 | 98.67 ± 1.34 |
| 11 | 2455 | 76.40 ± 11.03 | 92.07 ± 2.17 | 96.26 ± 1.64 | 97.26 ± 0.46 |
| 12 | 593 | 93.31 ± 4.47 | 95.22 ± 2.36 | 98.82 ± 0.70 | 98.14 ± 1.52 |
| 13 | 205 | 99.42 ± 0.80 | 84.93 ± 13.47 | 92.98 ± 6.97 | 96.04 ± 3.24 |
| 14 | 1265 | 96.93 ± 3.67 | 95.45 ± 2.03 | 97.41 ± 0.74 | 97.20 ± 0.69 |
| 15 | 386 | 96.33 ± 2.25 | 96.44 ± 2.54 | 97.71 ± 1.53 | 93.03 ± 6.84 |
| 16 | 93 | 98.86 ± 1.44 | 99.55 ± 0.90 | 97.44 ± 3.89 | 99.32 ± 1.15 |
| OA | | 88.42 ± 3.07 | 95.07 ± 1.02 | 97.66 ± 0.63 | 97.66 ± 0.85 |
| AA | | 94.11 ± 1.12 | 96.54 ± 0.88 | 98.28 ± 0.71 | 98.39 ± 0.49 |
| Kappa | | 86.90 ± 3.44 | 94.39 ± 1.16 | 97.34 ± 0.72 | 97.36 ± 0.40 |
| F1-measure | | 90.09 ± 0.01 | 95.66 ± 0.01 | 97.87 ± 0.01 | 97.27 ± 0.01 |
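The summary rows report overall accuracy (OA), average accuracy (AA), the Kappa coefficient, and the F1-measure (mean ± standard deviation over repeated runs). As a generic reference, and not the authors' evaluation code, the sketch below computes these metrics from a confusion matrix with NumPy; using the macro-averaged F1 is an assumption made here.

```python
# Generic computation of OA, AA, Kappa, and macro F1 from predicted vs. reference labels.
import numpy as np

def classification_metrics(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int):
    # Confusion matrix: rows = reference class, columns = predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)

    total = cm.sum()
    oa = np.trace(cm) / total                                     # overall accuracy
    per_class_acc = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)   # per-class (producer's) accuracy
    aa = per_class_acc.mean()                                     # average accuracy

    # Kappa: agreement beyond chance, computed from the row and column marginals.
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
    kappa = (oa - pe) / (1 - pe)

    # Macro F1: harmonic mean of per-class precision and recall, averaged over classes.
    precision = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)
    recall = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
    f1 = (2 * precision * recall / np.maximum(precision + recall, 1e-12)).mean()
    return oa, aa, kappa, f1

# Toy example with three classes.
y_ref = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_hat = np.array([0, 1, 1, 1, 2, 2, 0, 1])
print(classification_metrics(y_ref, y_hat, n_classes=3))
```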
Comparison of the proposed method with CNN, CDL-MD-L, and Co-DC-CNN on the AVIRIS Indian Pines dataset (OA, AA, and Kappa in %, for 5, 10, 15, and 20 labeled samples per class).

| Algorithm | Metric | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| CNN | OA | 47.33 ± 4.19 | 64.09 ± 2.76 | 68.90 ± 1.73 | 79.62 ± 1.06 |
| CNN | AA | 57.71 ± 2.74 | 78.5 ± 2.61 | 83.38 ± 1.25 | 88.73 ± 0.89 |
| CNN | Kappa | 41.95 ± 4.45 | 59.77 ± 2.74 | 65.15 ± 1.26 | 79.93 ± 0.93 |
| CDL-MD-L | OA | 74.85 | 86.46 ± 1.78 | 90.22 | 91.54 |
| CDL-MD-L | AA | 72.98 | 79.30 ± 1.66 | 85.12 | 88.02 |
| CDL-MD-L | Kappa | 74.13 | 84.63 ± 2.00 | 88.94 | 91.06 |
| Co-DC-CNN | OA | 85.81 ± 3.33 | 92.31 ± 1.23 | 95.13 ± 0.79 | 94.89 ± 1.02 |
| Co-DC-CNN | AA | 91.58 ± 1.32 | 93.53 ± 1.09 | 95.37 ± 0.87 | 95.40 ± 0.79 |
| Co-DC-CNN | Kappa | 84.64 ± 3.39 | 91.40 ± 1.54 | 94.88 ± 0.95 | 94.01 ± 0.74 |
| Proposed | OA | 88.42 ± 3.07 | 95.07 ± 1.02 | 97.66 ± 0.63 | 97.66 ± 0.85 |
| Proposed | AA | 94.11 ± 1.12 | 96.54 ± 0.88 | 98.28 ± 0.71 | 98.39 ± 0.49 |
| Proposed | Kappa | 86.90 ± 3.44 | 94.39 ± 1.16 | 97.34 ± 0.72 | 97.36 ± 0.40 |
Comparison of the proposed method with PNGrow, TT-AL-MSH-MKE, and S2CoTraC on the AVIRIS Indian Pines dataset (OA, AA, and Kappa in %; n/d = not reported).

| Algorithm | Metric | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| PNGrow | OA | 82.11 ± 2.69 | 89.18 ± 1.54 | 91.80 ± 2.07 | 93.22 ± 1.10 |
| PNGrow | AA | 88.60 ± 1.2 | 92.58 ± 1.1 | 94.26 ± 1.1 | 94.96 ± 0.7 |
| PNGrow | Kappa | 79.74 ± 3.0 | 87.70 ± 1.7 | 90.66 ± 2.3 | 92.27 ± 1.2 |
| TT-AL-MSH-MKE | OA | 71.05 ± 7.76 | 79.36 ± 6.95 | 83.44 ± 5.45 | n/d |
| TT-AL-MSH-MKE | AA | 80.48 ± 5.86 | 86.45 ± 4.93 | 89.38 ± 3.55 | n/d |
| TT-AL-MSH-MKE | Kappa | 67.88 ± 8.17 | 77.02 ± 7.46 | 81.45 ± 5.91 | n/d |
| S2CoTraC | OA | 69.15 ± 2.25 | 79.18 ± 0.56 | 90.40 ± 1.65 | 93.53 ± 0.69 |
| S2CoTraC | AA | 82.96 ± 2.64 | 88.93 ± 1.38 | 94.83 ± 0.94 | 94.20 ± 0.41 |
| S2CoTraC | Kappa | 65.97 ± 2.30 | 76.69 ± 0.53 | 89.10 ± 1.85 | 92.61 ± 0.78 |
| Proposed | OA | 88.42 ± 3.07 | 95.07 ± 1.02 | 97.66 ± 0.63 | 97.66 ± 0.85 |
| Proposed | AA | 94.11 ± 1.12 | 96.54 ± 0.88 | 98.28 ± 0.71 | 98.39 ± 0.49 |
| Proposed | Kappa | 86.90 ± 3.44 | 94.39 ± 1.16 | 97.34 ± 0.72 | 97.36 ± 0.40 |
Class-specific accuracies, OA, AA, Kappa, and F1-measure (%) of the proposed method on the ROSIS-03 University of Pavia dataset with 5, 10, 15, and 20 labeled samples per class.

| Class No. | Number of Samples | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| 1 | 6631 | 78.96 ± 9.94 | 93.40 ± 3.26 | 97.31 ± 0.66 | 98.51 ± 0.72 |
| 2 | 18,649 | 83.10 ± 5.93 | 96.40 ± 2.28 | 98.88 ± 0.55 | 99.28 ± 0.53 |
| 3 | 2099 | 89.14 ± 3.76 | 97.57 ± 1.52 | 98.55 ± 0.66 | 99.18 ± 0.49 |
| 4 | 3064 | 97.49 ± 0.35 | 97.54 ± 0.75 | 98.03 ± 0.75 | 98.71 ± 0.42 |
| 5 | 1345 | 100.00 ± 0.00 | 100.00 ± 0.00 | 99.97 ± 0.06 | 99.99 ± 0.03 |
| 6 | 5029 | 90.17 ± 5.20 | 97.58 ± 1.34 | 99.74 ± 0.28 | 99.96 ± 0.04 |
| 7 | 1330 | 98.68 ± 0.97 | 98.35 ± 0.68 | 99.93 ± 0.09 | 99.90 ± 0.11 |
| 8 | 3682 | 91.33 ± 2.27 | 93.15 ± 4.35 | 97.65 ± 0.69 | 98.27 ± 0.74 |
| 9 | 947 | 99.05 ± 0.66 | 99.08 ± 1.02 | 99.95 ± 0.08 | 99.87 ± 0.25 |
| OA | | 86.69 ± 2.94 | 96.16 ± 1.05 | 98.64 ± 0.21 | 99.16 ± 0.24 |
| AA | | 91.99 ± 1.04 | 97.01 ± 0.66 | 98.89 ± 0.12 | 99.30 ± 0.14 |
| Kappa | | 82.94 ± 3.57 | 94.95 ± 1.36 | 98.21 ± 0.27 | 98.89 ± 0.32 |
| F1-measure | | 86.19 ± 0.01 | 96.23 ± 0.01 | 97.76 ± 0.01 | 98.60 ± 0.00 |
Comparison of the proposed method with CNN, CDL-MD-L, and Co-DC-CNN on the ROSIS-03 University of Pavia dataset (OA, AA, and Kappa in %).

| Algorithm | Metric | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| CNN | OA | 55.40 ± 3.89 | 69.02 ± 2.26 | 72.38 ± 1.26 | 79.34 ± 0.51 |
| CNN | AA | 55.89 ± 3.31 | 63.13 ± 1.77 | 69.79 ± 1.37 | 77.52 ± 0.58 |
| CNN | Kappa | 44.16 ± 3.84 | 59.86 ± 1.95 | 64.62 ± 1.48 | 73.84 ± 0.43 |
| CDL-MD-L | OA | 72.85 | 82.61 ± 2.95 | 88.04 | 91.89 |
| CDL-MD-L | AA | 78.58 | 85.10 ± 2.45 | 89.12 | 91.32 |
| CDL-MD-L | Kappa | 63.71 | 78.07 ± 3.00 | 83.64 | 88.42 |
| Co-DC-CNN | OA | 83.47 ± 3.01 | 94.99 ± 1.49 | 95.33 ± 0.32 | 97.45 ± 0.45 |
| Co-DC-CNN | AA | 88.52 ± 1.39 | 95.51 ± 1.13 | 96.68 ± 0.27 | 98.72 ± 0.23 |
| Co-DC-CNN | Kappa | 81.78 ± 3.81 | 93.63 ± 1.31 | 95.72 ± 0.49 | 98.67 ± 0.40 |
| Proposed | OA | 86.69 ± 2.94 | 96.16 ± 1.05 | 98.64 ± 0.21 | 99.16 ± 0.24 |
| Proposed | AA | 91.99 ± 1.04 | 97.01 ± 0.66 | 98.89 ± 0.12 | 99.30 ± 0.14 |
| Proposed | Kappa | 82.94 ± 3.57 | 94.95 ± 1.36 | 98.21 ± 0.27 | 98.89 ± 0.32 |
Comparison of the proposed method with PNGrow, TT-AL-MSH-MKE, and S2CoTraC on the ROSIS-03 University of Pavia dataset (OA, AA, and Kappa in %; n/d = not reported).

| Algorithm | Metric | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| PNGrow | OA | 88.11 ± 2.87 | 93.85 ± 2.23 | 93.77 ± 3.42 | 96.90 ± 0.90 |
| PNGrow | AA | 91.53 ± 1.3 | 95.32 ± 0.6 | 95.96 ± 1.1 | 97.47 ± 0.4 |
| PNGrow | Kappa | 84.64 ± 3.5 | 91.95 ± 2.8 | 91.89 ± 4.3 | 95.90 ± 1.2 |
| TT-AL-MSH-MKE | OA | 79.04 ± 3.95 | 86.00 ± 3.04 | 90.20 ± 2.51 | n/d |
| TT-AL-MSH-MKE | AA | 85.99 ± 3.84 | 89.80 ± 2.74 | 92.16 ± 1.92 | n/d |
| TT-AL-MSH-MKE | Kappa | 85.99 ± 3.84 | 82.05 ± 3.87 | 87.24 ± 3.20 | n/d |
| S2CoTraC | OA | 50.76 ± 1.68 | 80.75 ± 0.35 | 82.87 ± 1.42 | 93.67 ± 1.51 |
| S2CoTraC | AA | 62.37 ± 1.96 | 82.37 ± 1.29 | 90.06 ± 0.84 | 92.45 ± 0.30 |
| S2CoTraC | Kappa | 42.62 ± 1.81 | 75.11 ± 1.21 | 78.56 ± 1.73 | 91.69 ± 1.88 |
| Proposed | OA | 86.69 ± 2.94 | 96.16 ± 1.05 | 98.64 ± 0.21 | 99.16 ± 0.24 |
| Proposed | AA | 91.99 ± 1.04 | 97.01 ± 0.66 | 98.89 ± 0.12 | 99.30 ± 0.14 |
| Proposed | Kappa | 82.94 ± 3.57 | 94.95 ± 1.36 | 98.21 ± 0.27 | 98.89 ± 0.32 |
Class-specific accuracies, OA, AA, Kappa, and F1-measure (%) of the proposed method on the AVIRIS Salinas Valley dataset with 5, 10, 15, and 20 labeled samples per class.

| Class No. | Number of Samples | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| 1 | 2009 | 99.80 ± 0.15 | 99.68 ± 0.75 | 99.89 ± 0.26 | 99.92 ± 0.07 |
| 2 | 3726 | 98.80 ± 1.80 | 99.96 ± 0.09 | 99.76 ± 0.64 | 100.00 ± 0.00 |
| 3 | 1976 | 99.98 ± 0.04 | 99.99 ± 0.02 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 4 | 1394 | 99.89 ± 0.08 | 99.98 ± 0.04 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 5 | 2678 | 99.15 ± 0.39 | 99.21 ± 0.33 | 99.45 ± 0.19 | 99.69 ± 0.29 |
| 6 | 3959 | 99.91 ± 0.23 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 7 | 3579 | 99.62 ± 0.35 | 99.92 ± 0.13 | 99.99 ± 0.02 | 100.00 ± 0.00 |
| 8 | 11,271 | 84.79 ± 5.97 | 91.39 ± 3.77 | 94.11 ± 1.77 | 95.45 ± 0.85 |
| 9 | 6203 | 99.38 ± 0.46 | 99.75 ± 0.16 | 99.84 ± 0.12 | 99.89 ± 0.12 |
| 10 | 3278 | 98.46 ± 0.61 | 99.34 ± 0.27 | 99.62 ± 0.25 | 99.35 ± 0.26 |
| 11 | 1068 | 99.87 ± 0.16 | 99.91 ± 0.09 | 99.91 ± 0.11 | 99.81 ± 0.21 |
| 12 | 1927 | 99.98 ± 0.03 | 99.76 ± 0.41 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 13 | 916 | 99.73 ± 0.39 | 99.85 ± 0.17 | 99.91 ± 0.10 | 99.80 ± 0.22 |
| 14 | 1070 | 98.51 ± 1.54 | 99.51 ± 0.27 | 99.80 ± 0.17 | 99.84 ± 0.18 |
| 15 | 7268 | 78.20 ± 13.77 | 94.55 ± 2.20 | 96.77 ± 3.88 | 98.42 ± 0.80 |
| 16 | 1807 | 99.10 ± 0.52 | 99.74 ± 0.35 | 99.83 ± 0.29 | 99.90 ± 0.17 |
| OA | | 93.50 ± 1.40 | 97.31 ± 0.60 | 98.23 ± 0.39 | 98.75 ± 0.22 |
| AA | | 97.20 ± 0.62 | 98.91 ± 0.18 | 99.30 ± 0.20 | 99.50 ± 0.10 |
| Kappa | | 92.77 ± 1.57 | 97.01 ± 0.66 | 98.03 ± 0.43 | 98.61 ± 0.24 |
| F1-measure | | 96.11 ± 0.01 | 98.53 ± 0.01 | 99.03 ± 0.01 | 99.46 ± 0.01 |
Comparison of the proposed method with CNN and Co-DC-CNN on the AVIRIS Salinas Valley dataset (OA, AA, and Kappa in %).

| Algorithm | Metric | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| CNN | OA | 75.96 ± 2.48 | 76.33 ± 1.24 | 86.89 ± 0.98 | 87.67 ± 0.33 |
| CNN | AA | 80.19 ± 2.19 | 85.91 ± 1.75 | 92.82 ± 0.87 | 94.4 ± 0.48 |
| CNN | Kappa | 73.26 ± 2.51 | 73.79 ± 1.51 | 85.46 ± 0.93 | 86.38 ± 0.31 |
| Co-DC-CNN | OA | 90.18 ± 1.59 | 94.45 ± 1.12 | 95.16 ± 0.34 | 95.72 ± 0.29 |
| Co-DC-CNN | AA | 93.63 ± 1.22 | 95.60 ± 0.68 | 96.54 ± 0.31 | 96.68 ± 0.15 |
| Co-DC-CNN | Kappa | 89.87 ± 1.21 | 94.64 ± 1.03 | 94.99 ± 0.54 | 95.70 ± 0.55 |
| Proposed | OA | 93.50 ± 1.40 | 97.31 ± 0.60 | 98.23 ± 0.39 | 98.75 ± 0.22 |
| Proposed | AA | 97.20 ± 0.62 | 98.91 ± 0.18 | 99.30 ± 0.20 | 99.50 ± 0.10 |
| Proposed | Kappa | 92.77 ± 1.57 | 97.01 ± 0.66 | 98.03 ± 0.43 | 98.61 ± 0.24 |
Comparison of the proposed method with PNGrow, TT-AL-MSH-MKE, and S2CoTraC on the AVIRIS Salinas Valley dataset (OA, AA, and Kappa in %; n/d = not reported).

| Algorithm | Metric | 5 | 10 | 15 | 20 |
|---|---|---|---|---|---|
| PNGrow | OA | 95.35 ± 1.3 | 97.36 ± 0.5 | 98.30 ± 0.4 | 98.61 ± 0.2 |
| PNGrow | AA | 96.48 ± 1.0 | 98.13 ± 0.4 | 98.60 ± 0.3 | 98.75 ± 0.2 |
| PNGrow | Kappa | 94.83 ± 1.5 | 97.06 ± 0.5 | 98.11 ± 0.5 | 98.45 ± 0.2 |
| TT-AL-MSH-MKE | OA | 89.32 ± 2.02 | 90.72 ± 1.38 | 92.34 ± 1.00 | n/d |
| TT-AL-MSH-MKE | AA | 93.88 ± 1.11 | 94.79 ± 0.77 | 95.67 ± 0.49 | n/d |
| TT-AL-MSH-MKE | Kappa | 88.14 ± 3.84 | 89.68 ± 1.53 | 91.48 ± 1.11 | n/d |
| S2CoTraC | OA | 77.46 ± 1.06 | 82.22 ± 0.85 | 95.82 ± 1.39 | 94.62 ± 1.17 |
| S2CoTraC | AA | 89.14 ± 2.16 | 92.88 ± 2.04 | 98.59 ± 0.34 | 98.00 ± 0.66 |
| S2CoTraC | Kappa | 75.61 ± 1.21 | 80.51 ± 0.91 | 95.36 ± 1.54 | 94.02 ± 1.31 |
| Proposed | OA | 93.50 ± 1.40 | 97.31 ± 0.60 | 98.23 ± 0.39 | 98.75 ± 0.22 |
| Proposed | AA | 97.20 ± 0.62 | 98.91 ± 0.18 | 99.30 ± 0.20 | 99.50 ± 0.10 |
| Proposed | Kappa | 92.77 ± 1.57 | 97.01 ± 0.66 | 98.03 ± 0.43 | 98.61 ± 0.24 |
Class-specific accuracies, OA, AA, Kappa, and F1-measure (%) of the proposed method on the Hyperion dataset with 5, 10, and 15 labeled samples per class.

| Class No. | Number of Samples | 5 | 10 | 15 |
|---|---|---|---|---|
| 1 | 24 | 86.84 ± 11.89 | 89.80 ± 9.09 | 91.11 ± 14.49 |
| 2 | 61 | 68.45 ± 14.49 | 84.87 ± 9.72 | 89.13 ± 2.66 |
| 3 | 54 | 62.92 ± 28.32 | 77.92 ± 2.16 | 86.67 ± 4.21 |
| 4 | 51 | 79.06 ± 35.51 | 90.86 ± 5.36 | 98.33 ± 1.52 |
| 5 | 32 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 6 | 148 | 63.29 ± 13.17 | 76.71 ± 2.76 | 85.71 ± 1.68 |
| 7 | 49 | 86.36 ± 7.04 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 8 | 39 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 9 | 82 | 83.34 ± 11.18 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 10 | 67 | 90.59 ± 17.74 | 99.25 ± 1.38 | 100.00 ± 0.00 |
| 11 | 20 | 72.23 ± 13.61 | 91.43 ± 12.15 | 100.00 ± 0.00 |
| 12 | 53 | 68.75 ± 12.29 | 100.00 ± 0.00 | 100.00 ± 0.00 |
| 13 | 79 | 64.64 ± 12.66 | 82.19 ± 8.99 | 88.44 ± 1.41 |
| OA | | 77.04 ± 4.48 | 89.17 ± 1.68 | 93.26 ± 0.83 |
| AA | | 80.71 ± 4.01 | 91.80 ± 1.48 | 95.34 ± 0.89 |
| Kappa | | 74.66 ± 4.90 | 87.92 ± 1.85 | 92.40 ± 0.93 |
| F1-measure | | 78.30 ± 0.05 | 89.77 ± 0.03 | 94.57 ± 0.00 |
Comparison of the proposed method with CNN, Co-DC-CNN, and S2CoTraC on the Hyperion dataset (OA, AA, and Kappa in %, for 5, 10, and 15 labeled samples per class).

| Algorithm | Metric | 5 | 10 | 15 |
|---|---|---|---|---|
| CNN | OA | 39.77 ± 4.84 | 74.56 ± 3.14 | 72.16 ± 2.26 |
| CNN | AA | 37.70 ± 4.61 | 77.78 ± 2.98 | 78.72 ± 1.99 |
| CNN | Kappa | 33.09 ± 5.22 | 71.71 ± 3.27 | 68.89 ± 2.17 |
| Co-DC-CNN | OA | 72.58 ± 4.27 | 82.47 ± 2.14 | 90.84 ± 1.30 |
| Co-DC-CNN | AA | 75.63 ± 4.09 | 84.70 ± 1.95 | 92.63 ± 1.28 |
| Co-DC-CNN | Kappa | 70.44 ± 4.95 | 81.99 ± 1.89 | 91.87 ± 1.34 |
| S2CoTraC | OA | 61.35 ± 1.06 | 79.54 ± 0.85 | 84.49 ± 1.39 |
| S2CoTraC | AA | 47.85 ± 2.16 | 70.51 ± 2.04 | 79.59 ± 0.34 |
| S2CoTraC | Kappa | 56.50 ± 1.21 | 77.26 ± 0.91 | 82.71 ± 1.54 |
| Proposed | OA | 77.04 ± 4.48 | 89.17 ± 1.68 | 93.26 ± 0.83 |
| Proposed | AA | 80.71 ± 4.01 | 91.80 ± 1.48 | 95.34 ± 0.89 |
| Proposed | Kappa | 74.66 ± 4.90 | 87.92 ± 1.85 | 92.40 ± 0.93 |
Training-set sizes, numbers of selected samples, and OA (%) at each co-training iteration on the University of Pavia dataset with five labeled samples per class; the percentage in parentheses is the labeling accuracy of the selected samples.

| | Iteration 1 | Iteration 2 | Iteration 3 | Iteration 4 |
|---|---|---|---|---|
| Training samples (spectral) | 45 ± 0.00 | 108 ± 3 | 537 ± 8 | 1136 ± 8 |
| Training samples (spatial) | 45 ± 0.00 | 215 ± 1 | 462 ± 12 | 1439 ± 43 |
| Selected samples (spectral) | 170 ± 1 (99.19%) | 248 ± 13 (99.45%) | 977 ± 55 (99.88%) | 1167 ± 47 (99.98%) |
| Selected samples (spatial) | 63 ± 3 (100.00%) | 429 ± 11 (99.83%) | 598 ± 16 (99.96%) | 1459 ± 76 (99.91%) |
| OA (%) | 54.2 ± 2.12 | 67.81 ± 4.85 | 86.69 ± 2.94 | 97.58 ± 0.13 |
Training-set sizes, numbers of selected samples, and OA (%) at each co-training iteration on the AVIRIS Salinas Valley dataset with five labeled samples per class; the percentage in parentheses is the labeling accuracy of the selected samples.

| | Iteration 1 | Iteration 2 | Iteration 3 | Iteration 4 |
|---|---|---|---|---|
| Training samples (spectral) | 80 ± 0.00 | 462 ± 66 | 1510 ± 19 | 3659 ± 149 |
| Training samples (spatial) | 80 ± 0.00 | 557 ± 18 | 1473 ± 13 | 3936 ± 105 |
| Selected samples (spectral) | 477 ± 18 (99.26%) | 916 ± 31 (99.52%) | 2463 ± 115 (99.48%) | 3552 ± 102 (99.84%) |
| Selected samples (spatial) | 382 ± 66 (100.00%) | 1048 ± 47 (99.79%) | 2149 ± 130 (99.93%) | 3816 ± 80 (99.87%) |
| OA (%) | 67.02 ± 2.29 | 80.44 ± 1.87 | 93.5 ± 1.42 | 98.32 ± 0.52 |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).