Superpixel-Guided Layer-Wise Embedding CNN for Remote Sensing Image Classification
Figure 1. The block diagram of the proposed classification framework. The core is the superpixel-guided layer-wise embedding CNN. To reduce the demand for training samples, unlabeled samples are also used to fine-tune the designed layer-wise embedding CNN. Considering the irregular spatial dependency of remote sensing images, superpixels are introduced to guide the selection of more valuable unlabeled samples (superpixel-guided patches in the figure), since superpixels adapt to the real scenes of remote sensing images.
Figure 2. The procedure of superpixel-based random sampling. Superpixels are introduced to guide the selection of unlabeled samples in order to handle the irregular spatial dependency of remote sensing images. Under this strategy, more representative and informative samples come from pixels located on superpixel boundaries, since these are more likely to lie close to class boundaries and are more easily misclassified.
Figure 3. The structure of the superpixel-guided layer-wise embedding CNN. The layer-wise embedding CNN consists of two encoders (the clean one, the green block at the top of the figure, and the noisy one, the blue block at the bottom) and one decoder (the yellow block in the middle). The objective cost function COST for fine-tuning combines the supervised cross entropy (COST1) and the unsupervised reconstruction cost (COST2^(l)). The size of the input patch x is set to 13 × 13, representing the neighborhood centered on the pixel to be classified. Superpixel-guided unlabeled patches, obtained from the superpixel-based random sampling strategy (purple block on the left), are the main input training samples for the layer-wise embedding CNN (black arrows) and are responsible for COST2^(l), while labeled patches are input to the noisy encoder along the orange arrow for the calculation of COST1. To obtain the classification maps of remote sensing images, at the test stage each patch is input to the clean encoder of the fine-tuned layer-wise embedding CNN, which outputs a clean class label.
Figure 4. The ROSIS Pavia University hyperspectral image. (a) False color image; (b) ground truth map.
Figure 5. The Landsat 5 TM multispectral image. (a) False color image; (b) ground truth map.
Figure 6. The EMISAR image. (a) False color image; (b) ground truth map.
Figure 7. OA as a function of the number of iterations in the fine-tuning process of SLE-CNN for (a) HSI data; (b) MSI data; (c) SAR data.
Figure 8. Ground truth map and classification maps of the ROSIS Pavia University hyperspectral image. (a) Ground truth; (b) SVM; (c) Laplacian SVM; (d) SL SVM; (e) SL-Gabor SVM; (f) CNN-AE; (g) Supervised; (h) SLE-CNN.
Figure 9. Ground truth map and classification maps of the Landsat 5 TM multispectral image. (a) Ground truth; (b) SVM; (c) Laplacian SVM; (d) SL SVM; (e) SL-Gabor SVM; (f) CNN-AE; (g) Supervised; (h) SLE-CNN.
Figure 10. Ground truth map and classification maps of the EMISAR image. (a) Ground truth; (b) SVM; (c) Laplacian SVM; (d) SL SVM; (e) SL-Gabor SVM; (f) CNN-AE; (g) Supervised; (h) SLE-CNN.
Figure 11. Classification accuracies of the designed layer-wise embedding CNN with the totally random sampling strategy and the superpixel-based random sampling strategy on (a) HSI data; (b) MSI data; (c) SAR data.
Figure 12. The impact of the number of labeled training samples per class on the OA for (a) HSI data; (b) MSI data; (c) SAR data.
Abstract
1. Introduction
- Considering that remote sensing images are characterized by irregular spatial dependency, we introduce a superpixel sampling strategy to guide the use of unlabeled samples. It automatically determines the neighborhood coverage of the spatial dependency system and thus adapts to the real scenes of remote sensing images. With the aid of these highly representative and informative unlabeled samples, the training process is boosted, leading to better classification results.
- Deep CNNs are efficient classifiers, but they require many labeled samples for training, which conflicts with the reality that only limited labeled samples are available. To reduce the demand for labeled samples, we develop an SLE-CNN that can take advantage of many unlabeled samples under the guidance of superpixel-based random sampling. It regards the CNN as the encoder part of an AE model and appends a reconstruction loss at each layer of the network that acts as extra supervision, thus combining the strong generalization capacity of deep CNN models with the detail-preserving ability of AEs.
- To demonstrate the performance of our framework on classification tasks for different types of remote sensing data, we conducted experiments that provide up-to-date results on benchmark problems. In addition, we compared our framework with several typical semi-supervised and supervised methods, which further verifies the effectiveness of the proposed framework.
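The layer-wise embedding objective sketched in the second contribution (supervised cross entropy plus a weighted reconstruction cost appended at each layer) can be illustrated in a few lines of numpy. This is a minimal sketch, not the authors' implementation; the function and argument names (`cross_entropy`, `clean_acts`, `denoised_acts`, `lambdas`) are hypothetical:

```python
import numpy as np

def cross_entropy(probs, labels):
    # COST1: supervised cross entropy over the labeled patches
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def reconstruction_cost(clean_acts, denoised_acts, lambdas):
    # COST2^(l): per-layer squared reconstruction error between the clean
    # encoder activations and the decoder's denoised estimates, weighted by
    # the layer-wise denoising costs (e.g., 100, 10, 1, 0.1 in this paper)
    return sum(lam * np.mean((z - z_hat) ** 2)
               for lam, z, z_hat in zip(lambdas, clean_acts, denoised_acts))

def total_cost(probs, labels, clean_acts, denoised_acts, lambdas):
    # COST = COST1 + sum over layers l of COST2^(l)
    return cross_entropy(probs, labels) + reconstruction_cost(
        clean_acts, denoised_acts, lambdas)
```

Only the labeled patches contribute to `cross_entropy`, while every (mostly unlabeled) patch contributes to `reconstruction_cost`, which is what lets the unlabeled superpixel-guided samples supervise every layer of the encoder.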
2. Methodology
2.1. Superpixel-Based Random Sampling
- Images are segmented into superpixels, and all pixels in the images are recorded as set A;
- A subset of set A is chosen at random and recorded as set B;
- Pixels located on the boundaries of superpixels are detected and recorded as set C;
- For each pixel in set B, if its spatial distance to any pixel of set C belonging to the same superpixel is less than or equal to k (in pixel units), it is put in the candidate list.
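The four steps above can be sketched in numpy, assuming the superpixel segmentation is given as a 2-D label map. The function name and parameters (`superpixel_random_sampling`, `sample_ratio`) are hypothetical, and distances are measured in Euclidean pixel units:

```python
import numpy as np

def superpixel_random_sampling(segments, sample_ratio, k, rng=None):
    """Select candidate unlabeled pixels near superpixel boundaries.

    segments: 2-D array of superpixel labels (set A is every pixel).
    """
    rng = np.random.default_rng(rng)
    h, w = segments.shape

    # Set C: pixels whose 4-neighborhood crosses a superpixel boundary
    boundary = np.zeros((h, w), dtype=bool)
    boundary[:-1, :] |= segments[:-1, :] != segments[1:, :]
    boundary[1:, :] |= segments[1:, :] != segments[:-1, :]
    boundary[:, :-1] |= segments[:, :-1] != segments[:, 1:]
    boundary[:, 1:] |= segments[:, 1:] != segments[:, :-1]
    boundary_coords = np.argwhere(boundary)

    # Set B: a random subset of all pixel coordinates
    coords = np.argwhere(np.ones((h, w), dtype=bool))
    picked = rng.choice(len(coords), int(sample_ratio * len(coords)),
                        replace=False)

    candidates = []
    for r, c in coords[picked]:
        # keep the pixel if a boundary pixel of the SAME superpixel
        # lies within Euclidean distance k
        same = boundary_coords[
            segments[boundary_coords[:, 0], boundary_coords[:, 1]]
            == segments[r, c]]
        if len(same) and np.min(np.hypot(same[:, 0] - r,
                                         same[:, 1] - c)) <= k:
            candidates.append((r, c))
    return candidates
```

With a small k, the candidate list concentrates on pixels hugging superpixel boundaries, which is exactly where the hard-to-classify samples described above are expected to lie.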
2.2. Autoencoder
2.3. Superpixel-Guided Layer-Wise Embedding CNN
2.3.1. General Steps for Constructing Layer-Wise Embedding CNN
2.3.2. CNN Based Encoder for Supervised Learning
2.3.3. Vertical Connection and Vanilla Combinator-Based Denoising Function for Unsupervised Learning
2.3.4. Overall Objective Function Formulation
Algorithm 1: Calculation of output class labels and objective function of the layer-wise embedding CNN
3. Experimental Setup
3.1. Dataset Description
3.2. Experiments
3.3. Experimental Results and Discussions
4. Discussion
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Class | Name | NoS | Class | Name | NoS |
---|---|---|---|---|---|
1 | asphalt | 6631 | 6 | bare soil | 5029 |
2 | meadows | 18,649 | 7 | bitumen | 1330 |
3 | gravel | 2099 | 8 | bricks | 3682 |
4 | trees | 3064 | 9 | shadows | 947 |
5 | metal sheets | 1345 | | Total | 42,776 |
Class | Name | NoS |
---|---|---|
1 | cultures | 62,322 |
2 | dams | 464 |
3 | bushes | 47,985 |
4 | grass lands | 59,883 |
5 | reforesting | 85,189 |
6 | roads | 2937 |
Total | 258,780 |
Class | Name | NoS |
---|---|---|
1 | winter wheat | 1088 |
2 | coniferous | 12,851 |
3 | water | 6695 |
4 | oat | 1172 |
5 | rye | 1767 |
Total | 23,573 |
Setting (per Encoder/Decoder Layer) | 1st | 2nd | 3rd | 4th |
---|---|---|---|---|
Filter size | | | | |
Number of filters | 3 | 6 | 6 | 9 |
Pooling stride | [2 2] | [2 2] | [2 2] | [2 2] |
Padding type | SAME | SAME | SAME | SAME |
Activation function | ReLU | ReLU | ReLU | ReLU |
Denoising cost | 100 | 10 | 1 | 0.1 |
Class | SVM | LapSVM | SL SVM | SL-Gabor SVM | CNN-AE | Supervised | SLE-CNN |
---|---|---|---|---|---|---|---|
asphalt | 74.64 ± 0.15 | 78.24 ± 0.21 | 78.37 ± 0.12 | 80.23 ± 0.21 | 77.95 ± 0.13 | 71.36 ± 0.15 | 80.20 ± 0.12 |
meadows | 61.69 ± 0.24 | 65.18 ± 0.14 | 63.14 ± 0.15 | 62.54 ± 0.30 | 77.84 ± 0.15 | 66.55 ± 0.27 | 79.16 ± 0.14 |
gravel | 42.12 ± 0.32 | 40.16 ± 0.32 | 48.96 ± 0.37 | 50.91 ± 0.45 | 58.21 ± 0.32 | 37.59 ± 0.42 | 60.33 ± 0.22 |
trees | 51.33 ± 0.33 | 64.15 ± 0.10 | 56.82 ± 0.23 | 64.25 ± 0.24 | 68.43 ± 0.18 | 62.71 ± 0.33 | 84.56 ± 0.11 |
metal sheets | 56.31 ± 0.16 | 92.76 ± 0.08 | 77.57 ± 0.12 | 85.18 ± 0.13 | 95.15 ± 0.04 | 95.67 ± 0.02 | 97.11 ± 0.08 |
bare soil | 34.85 ± 0.29 | 40.39 ± 0.27 | 39.10 ± 0.38 | 41.36 ± 0.37 | 55.63 ± 0.27 | 57.83 ± 0.27 | 60.57 ± 0.25 |
bitumen | 66.92 ± 0.37 | 54.28 ± 0.16 | 60.29 ± 0.24 | 62.57 ± 0.23 | 56.44 ± 0.33 | 50.22 ± 0.26 | 55.89 ± 0.32 |
bricks | 61.77 ± 0.12 | 66.92 ± 0.15 | 64.04 ± 0.22 | 63.60 ± 0.21 | 67.29 ± 0.16 | 63.64 ± 0.21 | 70.58 ± 0.17 |
shadows | 98.48 ± 0.04 | 98.33 ± 0.05 | 99.63 ± 0.06 | 98.53 ± 0.05 | 96.45 ± 0.02 | 99.45 ± 0.04 | 97.31 ± 0.03 |
OA | 60.62 ± 0.29 | 65.33 ± 0.27 | 63.29 ± 0.29 | 66.55 ± 0.31 | 72.36 ± 0.14 | 66.56 ± 0.29 | 75.49 ± 0.20 |
AA | 65.08 ± 0.20 | 69.97 ± 0.22 | 69.39 ± 0.27 | 70.93 ± 0.18 | 74.44 ± 0.15 | 70.09 ± 0.28 | 76.99 ± 0.19 |
Kappa | 49.83 ± 0.18 | 57.08 ± 0.19 | 54.66 ± 0.28 | 59.44 ± 0.30 | 66.30 ± 0.20 | 65.80 ± 0.23 | 68.83 ± 0.26 |
F-Measure | 60.90 ± 0.24 | 66.71 ± 0.27 | 65.32 ± 0.19 | 67.69 ± 0.23 | 72.60 ± 0.17 | 67.22 ± 0.15 | 76.19 ± 0.16 |
Class | SVM | LapSVM | SL SVM | SL-Gabor SVM | CNN-AE | Supervised | SLE-CNN |
---|---|---|---|---|---|---|---|
cultures | 48.86 ± 0.32 | 46.89 ± 0.26 | 45.37 ± 0.28 | 42.44 ± 0.34 | 69.31 ± 0.25 | 53.65 ± 0.32 | 71.98 ± 0.15 |
dams | 30.56 ± 0.45 | 33.90 ± 0.41 | 34.52 ± 0.42 | 36.66 ± 0.42 | 41.02 ± 0.37 | 38.27 ± 0.55 | 46.51 ± 0.40 |
bushes | 67.23 ± 0.26 | 70.31 ± 0.19 | 68.51 ± 0.17 | 70.53 ± 0.27 | 52.36 ± 0.34 | 48.55 ± 0.30 | 50.17 ± 0.37 |
grass lands | 42.18 ± 0.39 | 46.11 ± 0.45 | 44.72 ± 0.34 | 51.29 ± 0.32 | 45.34 ± 0.41 | 43.16 ± 0.44 | 45.95 ± 0.45 |
reforesting | 80.35 ± 0.13 | 84.09 ± 0.08 | 82.94 ± 0.16 | 86.10 ± 0.15 | 84.89 ± 0.16 | 85.48 ± 0.18 | 86.39 ± 0.18 |
roads | 58.25 ± 0.32 | 63.47 ± 0.32 | 59.45 ± 0.35 | 65.28 ± 0.25 | 66.53 ± 0.28 | 63.99 ± 0.26 | 67.42 ± 0.28 |
OA | 53.20 ± 0.28 | 58.58 ± 0.34 | 56.26 ± 0.29 | 58.76 ± 0.26 | 59.08 ± 0.30 | 55.42 ± 0.32 | 60.96 ± 0.20 |
AA | 50.92 ± 0.42 | 57.92 ± 0.23 | 56.27 ± 0.23 | 57.94 ± 0.34 | 60.33 ± 0.29 | 61.29 ± 0.22 | 62.67 ± 0.29 |
Kappa | 45.30 ± 0.30 | 46.37 ± 0.33 | 46.56 ± 0.31 | 47.60 ± 0.35 | 48.84 ± 0.31 | 46.26 ± 0.44 | 50.47 ± 0.36 |
F-Measure | 54.57 ± 0.31 | 57.46 ± 0.29 | 55.92 ± 0.31 | 58.72 ± 0.29 | 59.91 ± 0.27 | 55.52 ± 0.34 | 61.40 ± 0.26 |
Class | SVM | LapSVM | SL SVM | SL-Gabor SVM | CNN-AE | Supervised | SLE-CNN |
---|---|---|---|---|---|---|---|
winter wheat | 48.52 ± 0.43 | 57.11 ± 0.31 | 54.17 ± 0.40 | 61.26 ± 0.34 | 94.47 ± 0.09 | 84.73 ± 0.15 | 93.16 ± 0.05 |
coniferous | 96.31 ± 0.02 | 97.48 ± 0.05 | 98.30 ± 0.02 | 98.41 ± 0.06 | 98.24 ± 0.05 | 98.95 ± 0.03 | 98.97 ± 0.04 |
water | 80.47 ± 0.11 | 88.14 ± 0.16 | 87.26 ± 0.16 | 89.18 ± 0.11 | 90.43 ± 0.02 | 92.15 ± 0.08 | 91.49 ± 0.08 |
oat | 69.50 ± 0.25 | 83.37 ± 0.13 | 80.19 ± 0.18 | 79.67 ± 0.16 | 91.15 ± 0.11 | 80.49 ± 0.16 | 94.38 ± 0.05 |
rye | 72.93 ± 0.16 | 86.32 ± 0.18 | 84.58 ± 0.13 | 87.52 ± 0.19 | 92.68 ± 0.08 | 89.17 ± 0.13 | 96.33 ± 0.08 |
OA | 74.19 ± 0.18 | 84.36 ± 0.14 | 82.37 ± 0.07 | 85.20 ± 0.15 | 93.41 ± 0.09 | 91.08 ± 0.11 | 95.74 ± 0.08 |
AA | 66.38 ± 0.19 | 75.66 ± 0.22 | 73.49 ± 0.24 | 76.97 ± 0.17 | 90.50 ± 0.06 | 84.29 ± 0.18 | 92.90 ± 0.07 |
Kappa | 60.30 ± 0.22 | 75.03 ± 0.13 | 72.18 ± 0.25 | 76.89 ± 0.23 | 89.10 ± 0.14 | 89.20 ± 0.13 | 90.07 ± 0.11 |
F-Measure | 73.55 ± 0.14 | 82.48 ± 0.17 | 80.90 ± 0.19 | 83.21 ± 0.13 | 93.39 ± 0.07 | 89.10 ± 0.17 | 94.87 ± 0.12 |
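The tables above report OA, AA, kappa, and F-measure. As a reference for how the first three follow from a confusion matrix, here is a short sketch using the standard definitions (this is illustrative code, not from the paper; the function name `accuracy_metrics` is hypothetical):

```python
import numpy as np

def accuracy_metrics(cm):
    """OA, AA, and Cohen's kappa from a confusion matrix.

    cm[i, j] = number of samples of true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                       # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))  # mean per-class accuracy
    # chance agreement from the row/column marginals
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n ** 2
    kappa = (oa - pe) / (1 - pe)                # Cohen's kappa
    return oa, aa, kappa
```

Note that AA weights every class equally, which is why it can diverge noticeably from OA on the strongly imbalanced MSI scene.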
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, H.; Li, J.; He, L.; Wang, Y. Superpixel-Guided Layer-Wise Embedding CNN for Remote Sensing Image Classification. Remote Sens. 2019, 11, 174. https://doi.org/10.3390/rs11020174