Generative Adversarial Networks-Based Semi-Supervised Learning for Hyperspectral Image Classification
Figure 1. Flowchart of the proposed method.
Figure 2. Schematic diagram of the 3DBF.
Figure 3. The general GANs architectures.
Figure 4. Schematic illustration of the procedure for unsupervised/semi-supervised learning based on GANs.
Figure 5. A visual illustration of the semi-supervised hyperspectral classification method by GANs.
Figure 6. Indian Pines data. (a) Three-band false color composite and (b) ground truth data with 16 classes.
Figure 7. University of Pavia data. (a) Three-band false color composite and (b) ground truth data with 9 classes.
Figure 8. Salinas data. (a) Three-band false color composite and (b) ground truth data with 16 classes.
Figure 9. The spectral profiles of the pixel (18,6) from the original Indian Pines data, the 2DBF and the 3DBF.
Figure 10. Spatial scenes of the 4th, 22nd and 34th bands. (a,d,g) are chosen from the original Indian Pines data, (b,e,h) are obtained by the 2DBF, and (c,f,i) are obtained by the 3DBF.
Figure 11. Classification maps of the Indian Pines data with 5 samples per class.
Figure 12. Classification maps of the University of Pavia data with 5 samples per class.
Figure 13. Classification maps of the Salinas data with 5 samples per class.
Figure 14. The impact of parameters σs and σr on the OA in (a) Indian Pines data, (b) University of Pavia data and (c) Salinas data.
Figure 15. The impact of training epoch on the OA in (a) Indian Pines data, (b) University of Pavia data and (c) Salinas data.
Figure 16. The impact of learning rate on the OA in (a) Indian Pines data, (b) University of Pavia data and (c) Salinas data.
Figure 17. The impact of the number of labeled training samples per class on the OA in (a) Indian Pines data, (b) University of Pavia data and (c) Salinas data.
Abstract
1. Introduction
- We extract the spectral-spatial features by the 3DBF. Compared with vector/matrix-based methods, the structural features extracted by the 3DBF effectively preserve the spectral-spatial information, since the filter naturally respects the 3D form of the HSI and treats the data cube as a whole entity.
- We classify the HSI in a semi-supervised manner by the GANs. Compared with supervised methods, the GANs can utilize both the limited labeled training samples and the abundant unlabeled samples. Compared with non-adversarial networks, the GANs exploit a discriminative model to train the generative model in a game-theoretic (adversarial) manner.
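To make the adversarial objective concrete, the sketch below shows a discriminator loss of the K+1-class semi-supervised form popularized by the improved GAN training techniques, in which generated samples are treated as an additional "fake" class. This is a minimal illustration assuming a PyTorch setup; the network definitions, the generator loss (e.g., feature matching) and all names here are illustrative rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def semi_supervised_d_loss(logits_lab, labels, logits_unl, logits_fake):
    """Discriminator loss for K+1-class semi-supervised GAN training.
    Each `logits_*` tensor holds the K real-class logits produced by the
    discriminator; the 'fake' class is modeled implicitly so that
    p(fake | x) = 1 / (1 + sum_k exp(logit_k))."""
    # Supervised term: standard cross-entropy on the few labeled samples.
    loss_sup = F.cross_entropy(logits_lab, labels)

    # Unsupervised terms: real unlabeled samples should fall into one of the
    # K real classes, generated samples into the implicit fake class.
    lse_unl = torch.logsumexp(logits_unl, dim=1)
    lse_fake = torch.logsumexp(logits_fake, dim=1)
    loss_real = -torch.mean(lse_unl - F.softplus(lse_unl))   # -log p(real | x_unl)
    loss_fake = torch.mean(F.softplus(lse_fake))              # -log p(fake | G(z))
    return loss_sup + loss_real + loss_fake
```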
2. Proposed Semi-Supervised Method
2.1. Spectral-Spatial Features Extracted by 3D Bilateral Filter
- Convolve iw and w with a Gaussian G_{σs,σr} defined on the joint spatial-range domain S × R. In this step, iw and w are “blurred” into (iw)~ and w~, respectively.
- Obtain i~ by dividing (iw)~ by w~;
- Compute the value of i~ at (x, I(x)) to get the filtered result I_BF(x).
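The splat/blur/slice procedure above can be approximated numerically on a downsampled grid over the joint spatial-spectral-range domain. The following is a minimal sketch under that interpretation; the parameter names (sigma_s, sigma_r, range_bins), the nearest-neighbor splatting/slicing and the normalization choices are illustrative, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bilateral_filter_3d(cube, sigma_s=3.0, sigma_r=0.1, range_bins=32):
    """Approximate 3D bilateral filtering of a hyperspectral cube
    (rows x cols x bands) via the splat/blur/slice scheme: accumulate the
    homogeneous values on a coarse 4D grid, blur with a Gaussian, then slice."""
    I = cube.astype(np.float64)
    R = (I - I.min()) / (I.max() - I.min() + 1e-12)    # normalized range values

    ds = max(int(sigma_s), 1)                           # spatial downsampling factor
    grid_shape = tuple(s // ds + 2 for s in I.shape) + (range_bins + 2,)
    wi = np.zeros(grid_shape)                           # accumulated intensity * weight
    w = np.zeros(grid_shape)                            # accumulated weight

    xs, ys, zs = np.meshgrid(*[np.arange(s) for s in I.shape], indexing="ij")
    gx, gy, gz = xs // ds, ys // ds, zs // ds
    gr = np.round(R * range_bins).astype(int)

    np.add.at(wi, (gx, gy, gz, gr), I)                  # "splat" (nearest-neighbor)
    np.add.at(w, (gx, gy, gz, gr), 1.0)

    # "Blur": Gaussian on the joint domain (sigma_s spatially, sigma_r in range).
    sig = (sigma_s / ds,) * 3 + (sigma_r * range_bins,)
    wi = gaussian_filter(wi, sig)
    w = gaussian_filter(w, sig)

    # "Slice": normalize and read the result back at (x, I(x)).
    filtered_grid = wi / np.maximum(w, 1e-12)
    return filtered_grid[gx, gy, gz, gr]
```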
2.2. Semi-Supervised Classification of HSI by Generative Adversarial Networks
2.2.1. Brief Review of Generative Adversarial Networks
2.2.2. Generative Adversarial Networks for Classification
2.2.3. Hyperspectral Classification Framework Using Generative Adversarial Networks
3. Experimental Section
3.1. Dataset Description
- Indian Pines data: the first dataset was captured by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the agricultural Indian Pines test site in Northwestern Indiana, USA, on 12 June 1992. The original image contains 224 spectral bands. After removing 4 bands containing only zeros and 20 bands affected by noise and water-vapor absorption, 200 bands are left for the experiments. The image consists of 145 × 145 pixels with a spatial resolution of 20 m per pixel, and the spectral coverage ranges from 0.4 to 2.5 μm. Figure 6 depicts the color composite of the image as well as the ground truth map. There are 16 classes of interest, and the number of samples in each class is displayed in Table 1, whose background color denotes the different land-cover classes. Since the number of samples is unbalanced and the spatial resolution is relatively low, this dataset poses a big challenge to the classification task.
- University of Pavia data: the second dataset was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor over an urban area surrounding the University of Pavia, northern Italy, on 8 July 2002. The original data contain 115 spectral bands ranging from 0.43 to 0.86 μm, and the size of each band is 610 × 340 pixels with a spatial resolution of 1.3 m per pixel. After removing the 12 noisiest channels, 103 bands remain for the experiments. The dataset contains 9 classes with various types of land-covers. The color composite image together with the ground truth data are shown in Figure 7. The detailed number of samples in each class is listed in Table 2, whose background color also corresponds to the colors in Figure 7.
- Salinas data: the third dataset was collected by the AVIRIS sensor over the Salinas Valley, Southern California, USA, on 8 October 1998. The original dataset contains 224 spectral bands covering the visible to short-wave infrared range. After discarding 20 water-absorption bands, 204 bands are preserved for the experiments. This dataset consists of 512 × 217 pixels with a spatial resolution of 3.7 m per pixel. The color composite of the image and the ground truth are plotted in Figure 8, which contains 16 classes of interest. The detailed number of samples in each class is shown in Table 3, whose background color represents the different land-cover classes.
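In the experiments, the labeled training set is drawn per class from the ground truth (for example, 5 labeled samples per class for the classification maps in Figures 11–13); the remaining labeled pixels can then serve as the test set, and the unlabeled pixels as additional data for the semi-supervised GANs. A minimal sketch of such a split, assuming a ground-truth map in which 0 denotes the unlabeled background (the function name and defaults are illustrative):

```python
import numpy as np

def split_labeled(gt, n_per_class=5, seed=0):
    """Randomly pick `n_per_class` labeled pixels per class from a ground-truth
    map `gt` (0 = unlabeled background); the remaining labeled pixels form the
    test set, while unlabeled pixels remain available for semi-supervised training."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(gt[gt > 0]):
        idx = np.flatnonzero(gt == c)
        rng.shuffle(idx)
        train_idx.extend(idx[:n_per_class])
        test_idx.extend(idx[n_per_class:])
    return np.array(train_idx), np.array(test_idx)
```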
3.2. Experimental Setup
3.3. Experimental Results
4. Discussions
4.1. Statistical Significance Analysis of the Results
4.2. Sensitivity Analysis of the Parameters
5. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Sun, W.; Jiang, M.; Li, W.; Liu, Y. A symmetric sparse representation based band selection method for hyperspectral imagery classification. Remote Sens. 2016, 8, 238.
- Sun, W.; Zhang, D.; Xu, Y.; Tian, L.; Yang, G.; Li, W. A probabilistic weighted archetypal analysis method with earth mover’s distance for endmember extraction from hyperspectral imagery. Remote Sens. 2017, 9, 841.
- Pan, L.; Li, H.C.; Deng, Y.J.; Zhang, F.; Chen, X.D.; Du, Q. Hyperspectral dimensionality reduction by tensor sparse and low-rank graph-based discriminant analysis. Remote Sens. 2017, 9, 452.
- Feng, F.; Li, W.; Du, Q.; Zhang, B. Dimensionality reduction of hyperspectral image with graph-based discriminant analysis considering spectral similarity. Remote Sens. 2017, 9, 323.
- Gao, L.; Zhao, B.; Jia, X.; Liao, W.; Zhang, B. Optimized kernel minimum noise fraction transformation for hyperspectral image classification. Remote Sens. 2017, 9, 548.
- Sun, B.; Kang, X.; Li, S.; Benediktsson, J.A. Random-walker-based collaborative learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 212–222.
- Yang, L.; Wang, M.; Yang, S.; Zhang, R.; Zhang, P. Sparse spatio-spectral LapSVM with semisupervised kernel propagation for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2046–2054.
- Zhong, Y.; Ma, A.; Zhang, L. An adaptive memetic fuzzy clustering algorithm with spatial information for remote sensing imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1235–1248.
- Niazmardi, S.; Homayouni, S.; Safari, A. An improved FCM algorithm based on the SVDD for unsupervised hyperspectral data classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 831–839.
- Zhong, Y.; Zhang, L.; Huang, B.; Li, P. An unsupervised artificial immune classifier for multi/hyperspectral remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 420–431.
- Zhu, W.; Chayes, V.; Tiard, A.; Sanchez, S.; Dahlberg, D.; Bertozzi, A.L.; Osher, S.; Zosso, D.; Kuang, D. Unsupervised classification in hyperspectral imagery with nonlocal total variation and primal-dual hybrid gradient algorithm. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2786–2798.
- Vapnik, V. The Nature of Statistical Learning Theory; Springer Science & Business Media: New York, NY, USA, 2013.
- Kuo, B.C.; Ho, H.H.; Li, C.H.; Hung, C.C.; Taur, J.S. A kernel-based feature selection method for SVM with RBF kernel for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 317–326.
- Adep, R.N.; Shetty, A.; Ramesh, H. EXhype: A tool for mineral classification using hyperspectral data. ISPRS J. Photogramm. Remote Sens. 2017, 124, 106–118.
- Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification via kernel sparse representation. IEEE Trans. Geosci. Remote Sens. 2013, 51, 217–231.
- Zhang, Y.; Du, B.; Zhang, L.; Liu, T. Joint sparse representation and multitask learning for hyperspectral target detection. IEEE Trans. Geosci. Remote Sens. 2017, 55, 894–906.
- Li, J.; Bioucas-Dias, J.M.; Plaza, A. Semisupervised hyperspectral image classification using soft sparse multinomial logistic regression. IEEE Geosci. Remote Sens. Lett. 2013, 10, 318–322.
- Chapel, L.; Burger, T.; Courty, N.; Lefevre, S. PerTurbo manifold learning algorithm for weakly labeled hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1070–1078.
- Joachims, T. Transductive inference for text classification using support vector machines. In Proceedings of the Sixteenth International Conference on Machine Learning, San Francisco, CA, USA, 27–30 June 1999; pp. 200–209.
- Maulik, U.; Chakraborty, D. Learning with transductive SVM for semisupervised pixel classification of remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2013, 77, 66–78.
- Wang, L.; Hao, S.; Wang, Q.; Wang, Y. Semi-supervised classification for hyperspectral imagery based on spatial-spectral label propagation. ISPRS J. Photogramm. Remote Sens. 2014, 97, 123–137.
- Belkin, M.; Niyogi, P.; Sindhwani, V. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. J. Mach. Learn. Res. 2006, 7, 2399–2434.
- Camps-Valls, G.; Marsheva, T.V.B.; Zhou, D. Semi-supervised graph-based hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3044–3054.
- Melacci, S.; Belkin, M. Laplacian support vector machines trained in the primal. J. Mach. Learn. Res. 2011, 12, 1149–1184.
- De Morsier, F.; Borgeaud, M.; Gass, V.; Thiran, J.P.; Tuia, D. Kernel low-rank and sparse graph for unsupervised and semi-supervised classification of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3410–3420.
- Yamaguchi, Y.; Faloutsos, C.; Kitagawa, H. CAMLP: Confidence-aware modulated label propagation. In Proceedings of the 2016 SIAM International Conference on Data Mining, Miami, FL, USA, 5–7 May 2016; pp. 513–521.
- Dopido, I.; Li, J.; Marpu, P.R.; Plaza, A.; Dias, J.M.B.; Benediktsson, J.A. Semisupervised self-learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4032–4044.
- Aydemir, M.S.; Bilgin, G. Semisupervised hyperspectral image classification using small sample sizes. IEEE Geosci. Remote Sens. Lett. 2017, 14, 621–625.
- Zhang, X.; Song, Q.; Liu, R.; Wang, W.; Jiao, L. Modified co-training with spectral and spatial views for semisupervised hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2044–2055.
- Romaszewski, M.; Głomb, P.; Cholewa, M. Semi-supervised hyperspectral classification from a small number of training samples using a co-training approach. ISPRS J. Photogramm. Remote Sens. 2016, 121, 60–76.
- Fauvel, M.; Tarabalka, Y.; Benediktsson, J.A.; Chanussot, J.; Tilton, J.C. Advances in spectral-spatial classification of hyperspectral images. Proc. IEEE 2013, 101, 652–675.
- Cavallaro, G.; Mura, M.D.; Benediktsson, J.A.; Bruzzone, L. Extended self-dual attribute profiles for the classification of hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1690–1694.
- Bao, R.; Xia, J.; Mura, M.D.; Du, P.; Chanussot, J.; Ren, J. Combining morphological attribute profiles via an ensemble method for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2016, 13, 359–363.
- Jia, S.; Shen, L.; Li, Q. Gabor feature-based collaborative representation for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1118–1129.
- He, L.; Li, J.; Plaza, A.; Li, Y. Discriminative low-rank Gabor filtering for spectral–spatial hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1381–1395.
- Kang, X.; Li, S.; Benediktsson, J.A. Spectral-spatial hyperspectral image classification with edge-preserving filtering. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2666–2677.
- Demir, B.; Erturk, S. Empirical mode decomposition of hyperspectral images for support vector machine classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4071–4084.
- He, Z.; Wang, Q.; Shen, Y.; Jin, J.; Wang, Y. Multivariate gray model-based BEMD for hyperspectral image classification. IEEE Trans. Instrum. Meas. 2013, 62, 889–904.
- Zabalza, J.; Ren, J.; Zheng, J.; Han, J.; Zhao, H.; Li, S.; Marshall, S. Novel two-dimensional singular spectrum analysis for effective feature extraction and data classification in hyperspectral imaging. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4418–4433.
- Tarabalka, Y.; Chanussot, J.; Benediktsson, J. Segmentation and classification of hyperspectral images using watershed transformation. Pattern Recognit. 2010, 43, 2367–2379.
- Li, J.; Zhang, H.; Zhang, L. Efficient superpixel-level multitask joint sparse representation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5338–5351.
- He, Z.; Liu, L.; Zhou, S.; Shen, Y. Learning group-based sparse and low-rank representation for hyperspectral image classification. Pattern Recognit. 2016, 60, 1041–1056.
- Camps-Valls, G.; Gomez-Chova, L.; Muñoz-Marí, J.; Vila-Francés, J.; Calpe-Maravilla, J. Composite kernels for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2006, 3, 93–97.
- Gu, Y.; Liu, T.; Jia, X.; Benediktsson, J.A.; Chanussot, J. Nonlinear multiple kernel learning with multiple-structure-element extended morphological profiles for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3235–3247.
- Niazmardi, S.; Safari, A.; Homayouni, S. A novel multiple kernel learning framework for multiple feature classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3734–3743.
- Sun, L.; Wu, Z.; Liu, J.; Xiao, L.; Wei, Z. Supervised spectral-spatial hyperspectral image classification with weighted Markov random fields. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1490–1503.
- Bai, J.; Xiang, S.; Pan, C. A graph-based classification method for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 803–817.
- Sun, X.; Qu, Q.; Nasrabadi, N.M.; Tran, T.D. Structured priors for sparse-representation-based hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1235–1239.
- Xu, Y.; Fang, F.; Zhang, G. Similarity-guided and lp-regularized sparse unmixing of hyperspectral data. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2311–2315.
- Liu, C.; Zhou, J.; Liang, J.; Qian, Y.; Li, H.; Gao, Y. Exploring structural consistency in graph regularized joint spectral-spatial sparse coding for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1151–1164.
- Soltani-Farani, A.; Rabiee, H.R.; Hosseini, S.A. Spatial-aware dictionary learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 527–541.
- Sumarsono, A.; Du, Q. Low-rank subspace representation for estimating the number of signal subspaces in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6286–6292.
- Sun, W.; Yang, G.; Du, B.; Zhang, L.; Zhang, L. A sparse and low-rank near-isometric linear embedding method for feature extraction in hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4032–4046.
- Qian, Y.; Ye, M.; Zhou, J. Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2276–2291.
- Tsai, F.; Lai, J.S. Feature extraction of hyperspectral image cubes using three-dimensional gray-level cooccurrence. IEEE Trans. Geosci. Remote Sens. 2013, 51, 3504–3513.
- Zhang, L.; Zhang, L.; Tao, D.; Huang, X. Tensor discriminative locality alignment for hyperspectral image spectral–spatial feature extraction. IEEE Trans. Geosci. Remote Sens. 2013, 51, 242–256.
- He, Z.; Liu, L. Robust multitask learning with three-dimensional empirical mode decomposition-based features for hyperspectral classification. ISPRS J. Photogramm. Remote Sens. 2016, 121, 11–27.
- Zhang, L.; Zhang, L.; Du, B. Deep learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40.
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
- Chen, Y.; Zhao, X.; Jia, X. Spectral-spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392.
- Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67.
- Ma, X.; Wang, H.; Wang, J. Semisupervised classification for hyperspectral image based on multi-decision labeling and deep feature learning. ISPRS J. Photogramm. Remote Sens. 2016, 120, 99–107.
- Smith, S.M.; Brady, J.M. SUSAN—A new approach to low level image processing. Int. J. Comput. Vis. 1997, 23, 45–78.
- Paris, S.; Durand, F. A fast approximation of the bilateral filter using a signal processing approach. In Proceedings of the 9th European Conference on Computer Vision—ECCV, Graz, Austria, 7–13 May 2006; pp. 568–580.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680.
- Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved techniques for training GANs. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 2226–2234.
- Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision, Bombay, India, 7 January 1998; Narosa Publishing House: Delhi, India, 1998; pp. 839–846.
- Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434.
- Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784.
- Denton, E.L.; Chintala, S.; Szlam, A.; Fergus, R. Deep generative image models using a Laplacian pyramid of adversarial networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Montreal, QC, Canada, 7–12 December 2015; pp. 1486–1494.
- Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, I.; Abbeel, P. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Barcelona, Spain, 5–10 December 2016; pp. 2172–2180.
- Metz, L.; Poole, B.; Pfau, D.; Sohl-Dickstein, J. Unrolled generative adversarial networks. arXiv 2016, arXiv:1611.02163.
- Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875.
- Wang, X.; Gupta, A. Generative image modeling using style and structure adversarial networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Cham, Switzerland, 2016; pp. 318–335.
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv 2016, arXiv:1609.04802.
- Yeh, R.; Chen, C.; Lim, T.Y.; Hasegawa-Johnson, M.; Do, M.N. Semantic image inpainting with perceptual and contextual losses. arXiv 2016, arXiv:1607.07539.
- Springenberg, J.T. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv 2015, arXiv:1511.06390.
- Premachandran, V.; Yuille, A.L. Unsupervised learning using generative adversarial training and clustering. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017.
- Sutskever, I.; Jozefowicz, R.; Gregor, K.; Rezende, D.; Lillicrap, T.; Vinyals, O. Towards principled unsupervised learning. arXiv 2015, arXiv:1511.06440.
Table 1. Number of samples (NoS) in each class of the Indian Pines data.

Class | Name | NoS | Class | Name | NoS |
---|---|---|---|---|---|
1 | alfalfa | 54 | 9 | oats | 20 |
2 | corn-no till | 1434 | 10 | soybean-no till | 968 |
3 | corn-min till | 834 | 11 | soybean-min till | 2468 |
4 | corn | 234 | 12 | soybean-clean till | 614 |
5 | grass/pasture | 497 | 13 | wheat | 212 |
6 | grass/trees | 747 | 14 | woods | 1294 |
7 | grass/pasture-mowed | 26 | 15 | bldg-grass-tree-drives | 380 |
8 | hay-windrowed | 489 | 16 | stone-steel towers | 95 |
Total | 10,366 |
Table 2. Number of samples (NoS) in each class of the University of Pavia data.

Class | Name | NoS | Class | Name | NoS |
---|---|---|---|---|---|
1 | asphalt | 6631 | 6 | bare soil | 5029 |
2 | meadows | 18,649 | 7 | bitumen | 1330 |
3 | gravel | 2099 | 8 | bricks | 3682 |
4 | trees | 3064 | 9 | shadows | 947 |
5 | metal sheets | 1345 | Total | 42,776 |
Table 3. Number of samples (NoS) in each class of the Salinas data.

Class | Name | NoS | Class | Name | NoS |
---|---|---|---|---|---|
1 | brocoli-green-weeds-1 | 2009 | 9 | soil-vinyard-develop | 6203 |
2 | brocoli-green-weeds-2 | 3726 | 10 | corn-senesced-green-weeds | 3278 |
3 | fallow | 1976 | 11 | lettuce-romaine-4wk | 1068 |
4 | fallow-rough-plow | 1394 | 12 | lettuce-romaine-5wk | 1927 |
5 | fallow-smooth | 2678 | 13 | lettuce-romaine-6wk | 916 |
6 | stubble | 3959 | 14 | lettuce-romaine-7wk | 1070 |
7 | celery | 3579 | 15 | vinyard-untrained | 7268 |
8 | grapes-untrained | 11,271 | 16 | vinyard-vertical-trellis | 1807 |
Total | 54,129 |
Class | Spec-SVM | Spec-LapSVM | Spec-CDL-MD-L | Spec-GANs | 2DBF-SVM | 2DBF-LapSVM | 2DBF-CDL-MD-L | 2DBF-GANs | 3DBF-SVM | 3DBF-LapSVM | 3DBF-CDL-MD-L | 3DBF-GANs |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 a | 40.17 | 46.15 | 96.58 | 48.42 | 77.65 | 84.48 | 96.47 | 95.46 | 95.27 | 94.27 | 96.51 | 96.08 |
2 | 33.07 | 32.98 | 65.31 | 48.02 | 39.67 | 39.93 | 64.93 | 63.71 | 46.01 | 45.05 | 63.70 | 66.31 |
3 | 40.06 | 38.02 | 51.35 | 44.96 | 30.16 | 29.03 | 52.21 | 53.14 | 39.72 | 36.61 | 49.89 | 49.95 |
4 | 25.93 | 27.73 | 50.79 | 37.30 | 26.89 | 22.67 | 50.25 | 48.09 | 45.35 | 39.42 | 46.41 | 53.98 |
5 | 61.38 | 55.02 | 85.21 | 74.23 | 69.10 | 74.06 | 89.35 | 88.73 | 74.51 | 77.23 | 92.42 | 91.75 |
6 | 68.90 | 73.71 | 97.18 | 80.31 | 92.52 | 94.23 | 97.59 | 97.62 | 97.04 | 97.90 | 97.96 | 98.19 |
7 | 38.11 | 56.80 | 64.58 | 60.39 | 28.52 | 29.10 | 61.23 | 61.04 | 56.22 | 53.82 | 81.33 | 71.89 |
8 | 77.12 | 87.39 | 99.84 | 89.10 | 96.84 | 98.83 | 99.86 | 99.81 | 99.81 | 99.83 | 99.83 | 99.79 |
9 | 24.62 | 20.01 | 81.08 | 39.60 | 25.03 | 26.61 | 81.20 | 79.68 | 73.88 | 73.93 | 80.34 | 79.03 |
10 | 46.77 | 43.29 | 57.60 | 57.66 | 34.88 | 34.35 | 57.81 | 60.08 | 45.63 | 46.59 | 56.66 | 58.37 |
11 | 48.73 | 50.98 | 66.44 | 55.36 | 41.81 | 53.06 | 66.17 | 67.86 | 52.04 | 60.77 | 69.22 | 71.61 |
12 | 26.28 | 28.31 | 61.31 | 36.92 | 37.96 | 38.06 | 60.56 | 57.50 | 46.19 | 51.53 | 64.85 | 64.34 |
13 | 85.68 | 83.23 | 99.23 | 88.16 | 96.76 | 96.99 | 99.15 | 98.90 | 98.91 | 99.27 | 99.60 | 99.41 |
14 | 76.75 | 78.87 | 89.22 | 82.70 | 85.81 | 88.12 | 91.57 | 90.95 | 85.47 | 88.30 | 93.18 | 95.11 |
15 | 30.73 | 19.03 | 81.53 | 38.46 | 60.38 | 62.74 | 83.55 | 82.73 | 68.39 | 69.76 | 82.06 | 85.36 |
16 | 73.24 | 75.93 | 83.13 | 87.75 | 95.58 | 97.26 | 82.94 | 83.16 | 66.14 | 69.46 | 87.97 | 84.49 |
OA | 49.60 | 51.75 | 72.88 | 59.09 | 53.88 | 56.51 | 73.28 | 73.53 | 62.56 | 65.33 | 74.12 | 75.62 |
AA | 60.93 | 59.62 | 79.29 | 70.12 | 68.85 | 70.04 | 79.99 | 79.48 | 70.64 | 70.92 | 80.47 | 81.05 |
κ | 43.84 | 45.70 | 69.18 | 54.36 | 48.75 | 51.44 | 69.71 | 69.89 | 57.42 | 60.09 | 70.59 | 72.23 |
F-Measure | 49.85 | 51.09 | 76.90 | 60.58 | 58.72 | 60.60 | 77.18 | 76.78 | 68.16 | 68.98 | 78.87 | 79.10 |
Class | Spec-SVM | Spec-LapSVM | Spec-CDL-MD-L | Spec-GANs | 2DBF-SVM | 2DBF-LapSVM | 2DBF-CDL-MD-L | 2DBF-GANs | 3DBF-SVM | 3DBF-LapSVM | 3DBF-CDL-MD-L | 3DBF-GANs |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 a | 71.43 | 77.56 | 79.24 | 73.88 | 71.35 | 72.65 | 79.44 | 79.00 | 61.97 | 76.91 | 80.30 | 81.21 |
2 | 58.68 | 59.85 | 78.14 | 69.26 | 66.55 | 66.76 | 78.96 | 79.95 | 59.44 | 75.99 | 81.83 | 84.45 |
3 | 39.99 | 20.16 | 52.27 | 42.49 | 36.52 | 48.97 | 51.24 | 53.92 | 46.36 | 49.88 | 60.12 | 58.56 |
4 | 48.83 | 62.14 | 65.65 | 67.27 | 60.17 | 57.39 | 68.42 | 71.53 | 79.67 | 70.29 | 79.52 | 84.57 |
5 | 53.20 | 89.41 | 96.23 | 93.09 | 94.85 | 82.61 | 96.11 | 96.45 | 95.76 | 96.64 | 97.10 | 97.29 |
6 | 32.20 | 36.39 | 56.88 | 38.01 | 44.92 | 47.43 | 56.64 | 58.43 | 60.71 | 52.91 | 60.21 | 62.60 |
7 | 63.78 | 51.27 | 52.65 | 54.75 | 46.10 | 47.63 | 53.33 | 53.44 | 76.52 | 51.07 | 56.45 | 59.25 |
8 | 57.80 | 64.92 | 67.92 | 64.05 | 60.78 | 63.34 | 68.28 | 68.38 | 60.60 | 66.01 | 71.60 | 71.54 |
9 | 95.61 | 99.92 | 95.91 | 99.90 | 94.55 | 93.84 | 95.44 | 95.63 | 96.98 | 94.52 | 96.31 | 96.28 |
OA | 53.62 | 60.17 | 71.83 | 63.66 | 61.80 | 62.62 | 72.32 | 73.29 | 63.39 | 69.87 | 75.78 | 77.94 |
AA | 63.76 | 69.53 | 76.08 | 72.85 | 70.21 | 70.66 | 76.37 | 77.33 | 70.89 | 75.43 | 80.43 | 81.36 |
κ | 44.18 | 51.33 | 64.24 | 54.62 | 52.81 | 53.81 | 64.85 | 66.04 | 54.47 | 62.07 | 69.26 | 71.82 |
F-Measure | 57.95 | 62.40 | 71.65 | 66.97 | 63.98 | 64.51 | 71.98 | 72.97 | 64.82 | 70.47 | 75.94 | 77.30 |
Class | Spec-SVM | Spec-LapSVM | Spec-CDL-MD-L | Spec-GANs | 2DBF-SVM | 2DBF-LapSVM | 2DBF-CDL-MD-L | 2DBF-GANs | 3DBF-SVM | 3DBF-LapSVM | 3DBF-CDL-MD-L | 3DBF-GANs |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 a | 82.66 | 87.42 | 97.35 | 92.06 | 82.37 | 93.65 | 97.81 | 97.89 | 94.24 | 93.82 | 97.75 | 98.18 |
2 | 86.98 | 92.44 | 98.01 | 92.98 | 88.92 | 94.71 | 98.25 | 98.30 | 95.93 | 95.70 | 98.37 | 98.63 |
3 | 46.08 | 32.97 | 89.44 | 66.85 | 46.42 | 65.43 | 89.60 | 90.14 | 67.57 | 68.24 | 90.88 | 92.86 |
4 | 96.56 | 96.00 | 95.04 | 96.47 | 88.05 | 93.57 | 95.95 | 96.05 | 95.04 | 95.10 | 96.18 | 94.63 |
5 | 87.68 | 84.84 | 93.79 | 90.16 | 81.58 | 87.39 | 94.35 | 94.48 | 88.76 | 89.75 | 95.52 | 94.10 |
6 | 98.88 | 94.60 | 98.08 | 98.72 | 98.84 | 97.25 | 98.19 | 98.91 | 96.97 | 97.04 | 98.94 | 99.64 |
7 | 91.97 | 93.90 | 97.20 | 93.71 | 93.05 | 93.39 | 97.19 | 97.17 | 93.43 | 93.46 | 97.22 | 98.18 |
8 | 56.12 | 59.05 | 66.00 | 58.39 | 68.87 | 58.12 | 69.10 | 68.05 | 59.30 | 61.64 | 73.27 | 76.40 |
9 | 93.92 | 96.45 | 98.04 | 96.49 | 97.91 | 97.02 | 98.20 | 98.20 | 97.46 | 97.49 | 98.33 | 98.45 |
10 | 52.58 | 69.94 | 76.58 | 71.22 | 43.17 | 65.56 | 77.37 | 79.16 | 65.01 | 64.47 | 81.58 | 83.23 |
11 | 61.79 | 71.97 | 83.86 | 73.16 | 55.93 | 74.06 | 84.04 | 86.07 | 74.48 | 74.72 | 84.05 | 88.22 |
12 | 72.09 | 75.79 | 97.52 | 83.88 | 72.09 | 82.98 | 97.61 | 97.70 | 83.76 | 82.06 | 96.02 | 98.18 |
13 | 72.10 | 77.04 | 84.87 | 78.53 | 75.46 | 78.43 | 88.99 | 89.04 | 79.28 | 79.97 | 85.15 | 89.56 |
14 | 79.22 | 81.93 | 81.64 | 79.74 | 74.10 | 80.11 | 82.54 | 81.24 | 81.75 | 80.82 | 82.18 | 85.53 |
15 | 55.90 | 56.25 | 56.73 | 51.64 | 59.81 | 50.32 | 55.05 | 61.31 | 55.52 | 54.94 | 56.34 | 67.11 |
16 | 62.32 | 76.79 | 88.19 | 73.46 | 78.34 | 71.90 | 87.53 | 91.80 | 69.92 | 71.45 | 92.57 | 94.58 |
OA | 73.22 | 74.23 | 82.72 | 77.17 | 75.15 | 76.47 | 83.53 | 84.38 | 77.78 | 78.12 | 85.11 | 87.63 |
AA | 77.92 | 78.89 | 89.48 | 83.18 | 77.45 | 82.75 | 89.88 | 90.71 | 83.57 | 83.90 | 90.61 | 92.30 |
κ | 70.40 | 71.47 | 80.84 | 74.70 | 72.44 | 73.93 | 81.71 | 82.69 | 75.39 | 75.77 | 83.44 | 86.26 |
F-Measure | 74.80 | 77.96 | 87.65 | 81.09 | 75.31 | 80.24 | 88.24 | 89.09 | 81.15 | 81.29 | 89.02 | 91.09 |
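The summary rows in the tables above (OA, AA, κ and F-measure) follow the usual definitions computed from each classifier's confusion matrix. A minimal sketch under those standard definitions; how the authors average the per-class F-measure is not stated here, so the macro average below is an assumption.

```python
import numpy as np

def accuracy_metrics(conf):
    """Compute OA, AA, Cohen's kappa and macro F-measure from a confusion
    matrix `conf`, where conf[i, j] counts samples of true class i predicted as j."""
    conf = conf.astype(np.float64)
    n = conf.sum()
    diag = np.diag(conf)
    row = conf.sum(axis=1)                           # true-class totals
    col = conf.sum(axis=0)                           # predicted-class totals

    oa = diag.sum() / n                              # overall accuracy
    aa = np.mean(diag / np.maximum(row, 1))          # average (per-class) accuracy
    pe = (row * col).sum() / n**2                    # chance agreement
    kappa = (oa - pe) / (1 - pe)

    precision = diag / np.maximum(col, 1)
    recall = diag / np.maximum(row, 1)
    f1 = np.mean(2 * precision * recall / np.maximum(precision + recall, 1e-12))
    return oa, aa, kappa, f1
```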
Methods | Z (Indian Pines Data) | Z (University of Pavia Data) | Z (Salinas Data) |
---|---|---|---|
3DBF-GANs vs. Spec-SVM | 28.04 | 42.21 | 40.21 |
3DBF-GANs vs. Spec-LapSVM | 26.93 | 41.45 | 38.54 |
3DBF-GANs vs. Spec-CDL-MD-L | 5.91 | 18.52 | 17.24 |
3DBF-GANs vs. Spec-GANs | 18.62 | 30.72 | 18.38 |
3DBF-GANs vs. 2DBF-SVM | 25.01 | 41.32 | 29.31 |
3DBF-GANs vs. 2DBF-LapSVM | 19.63 | 39.79 | 24.52 |
3DBF-GANs vs. 2DBF-CDL-MD-L | 4.72 | 17.92 | 20.11 |
3DBF-GANs vs. 2DBF-GANs | 4.45 | 17.63 | 19.54 |
3DBF-GANs vs. 3DBF-SVM | 16.71 | 38.22 | 18.91 |
3DBF-GANs vs. 3DBF-LapSVM | 15.57 | 25.07 | 18.57 |
3DBF-GANs vs. 3DBF-CDL-MD-L | 1.64 | 10.37 | 16.81 |
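Standardized Z values of this kind are commonly obtained with McNemar's test on the paired per-pixel decisions of two classifiers, with |Z| > 1.96 indicating a significant difference at the 5% level. Assuming that interpretation (which the table itself does not state), a minimal sketch with illustrative names:

```python
import numpy as np

def mcnemar_z(y_true, pred_a, pred_b):
    """Standardized McNemar statistic comparing two classifiers on the same test pixels."""
    a_correct = (pred_a == y_true)
    b_correct = (pred_b == y_true)
    f12 = np.sum(a_correct & ~b_correct)   # pixels only classifier A gets right
    f21 = np.sum(~a_correct & b_correct)   # pixels only classifier B gets right
    return (f12 - f21) / np.sqrt(max(f12 + f21, 1))
```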
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).