Bidirectional-Convolutional LSTM Based Spectral-Spatial Feature Learning for Hyperspectral Image Classification
"> Figure 1
<p>The structure of RNN.</p> "> Figure 2
<p>Flowchart of the Bi-CLSTM network for HSI classification. For a given pixel, a local cube surrounding it is first extracted, and then unfolded across the spectral domain. The unfolded images are fed into the Bi-CLSTM network one by one.</p> "> Figure 3
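The cube-extraction and spectral-unfolding step described in the Figure 2 caption can be sketched as follows; the 7 × 7 patch size, the reflect padding, and the `unfold_cube` name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def unfold_cube(hsi, row, col, patch=7):
    """Extract a patch x patch spatial cube centred on (row, col) and
    unfold it across the spectral domain into a sequence of 2-D images,
    one per band (the per-step inputs of the Bi-CLSTM)."""
    r = patch // 2
    # Pad spatially so that border pixels also get full-sized cubes.
    padded = np.pad(hsi, ((r, r), (r, r), (0, 0)), mode="reflect")
    cube = padded[row:row + patch, col:col + patch, :]  # patch x patch x bands
    # Move the band axis to the front: it plays the role of "time".
    return np.transpose(cube, (2, 0, 1))

hsi = np.random.rand(145, 145, 200)   # Indian Pines-like dimensions
seq = unfold_cube(hsi, 10, 20)
print(seq.shape)                      # (200, 7, 7): 200 band images of 7 x 7
```

Each of the 200 slices is then processed by one CLSTM time step, so the spectral dimension becomes the sequence dimension of the recurrent network.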
Figure 3. The structure of CLSTM.
Figure 4. An example of data augmentation. (a) the original image; (b–d) the images after anticlockwise rotation by 90, 180, and 270 degrees; (e) vertical flip of (c); (f) horizontal flip of (d); (g,h) the horizontally and vertically flipped versions of (c,d).
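The eight-fold augmentation illustrated in Figure 4 (the original patch, three anticlockwise rotations, and flips of the rotated copies) can be written compactly; `augment` is an illustrative helper name, not the authors' code.

```python
import numpy as np

def augment(patch):
    """Return the eight augmented copies of a spatial patch (H x W x bands)
    shown in Figure 4: (a) original, (b)-(d) anticlockwise rotations,
    (e)-(h) flips of the rotated copies."""
    r90, r180, r270 = (np.rot90(patch, k) for k in (1, 2, 3))
    return [patch, r90, r180, r270,
            np.flipud(r180),   # (e) vertical flip of the 180-degree copy
            np.fliplr(r270),   # (f) horizontal flip of the 270-degree copy
            np.fliplr(r180),   # (g) horizontal flip of the 180-degree copy
            np.flipud(r270)]   # (h) vertical flip of the 270-degree copy

patches = augment(np.random.rand(7, 7, 200))
print(len(patches))   # 8
```

Because rotations and flips act only on the two spatial axes, the spectral axis of each patch is left untouched.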
Figure 5. Indian Pines scene dataset. (a) false-color composite of the Indian Pines scene; (b) ground truth map containing 16 mutually exclusive land cover classes.
Figure 6. Pavia University scene dataset. (a) false-color composite of the Pavia University scene; (b) ground truth map containing nine mutually exclusive land cover classes.
Figure 7. KSC dataset. (a) false-color composite of the KSC scene; (b) ground truth map containing 13 mutually exclusive land cover classes.
Figure 8. Classification maps using eight different methods on the Indian Pines dataset: (a) original; (b) RLDE; (c) MDA; (d) 2D-CNN; (e) 3D-CNN; (f) LSTM; (g) CNN+LSTM; (h) Bi-CLSTM.
Figure 9. Classification maps using eight different methods on the Pavia University dataset: (a) original; (b) RLDE; (c) MDA; (d) 2D-CNN; (e) 3D-CNN; (f) LSTM; (g) CNN+LSTM; (h) Bi-CLSTM.
Figure 10. Classification maps using eight different methods on the KSC dataset: (a) original; (b) RLDE; (c) MDA; (d) 2D-CNN; (e) 3D-CNN; (f) LSTM; (g) CNN+LSTM; (h) Bi-CLSTM.
Abstract
1. Introduction
2. Review of RNN and LSTM
3. Methodology
Algorithm 1: Algorithm for the Bi-CLSTM model.
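The CLSTM cell used inside the Bi-CLSTM replaces the matrix multiplications of a standard LSTM with convolutions, following Shi et al.'s ConvLSTM. Below is a minimal single-channel, single-step sketch; the 3 × 3 kernels are an illustrative assumption, and biases and peephole terms are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_same(x, kernel):
    """'Same'-size 2-D correlation of a single-channel map with a k x k kernel."""
    k = kernel.shape[0]
    xp = np.pad(x, k // 2)
    windows = np.lib.stride_tricks.sliding_window_view(xp, (k, k))
    return np.einsum("ijkl,kl->ij", windows, kernel)

def clstm_step(x, h_prev, c_prev, W):
    """One CLSTM time step. W maps gate names ('xi', 'hi', ...) to kernels
    for the input (i), forget (f), cell (g), and output (o) gates."""
    i = sigmoid(conv_same(x, W["xi"]) + conv_same(h_prev, W["hi"]))
    f = sigmoid(conv_same(x, W["xf"]) + conv_same(h_prev, W["hf"]))
    g = np.tanh(conv_same(x, W["xc"]) + conv_same(h_prev, W["hc"]))
    o = sigmoid(conv_same(x, W["xo"]) + conv_same(h_prev, W["ho"]))
    c = f * c_prev + i * g      # cell state keeps its 2-D spatial layout
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
W = {name: 0.1 * rng.standard_normal((3, 3))
     for name in ("xi", "hi", "xf", "hf", "xc", "hc", "xo", "ho")}
h = c = np.zeros((7, 7))
for band in rng.standard_normal((5, 7, 7)):   # 5 spectral "time" steps
    h, c = clstm_step(band, h, c, W)
print(h.shape)   # (7, 7)
```

In the bidirectional network, a second CLSTM runs over the band sequence in the reverse direction, and the two hidden-state streams are combined before the softmax classifier.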
4. Experimental Results
4.1. Datasets
- Indian Pines: The first dataset was acquired by the AVIRIS sensor over the Indian Pines test site in northwestern Indiana, USA, on 12 June 1992, and it contains 224 spectral bands. We utilize 200 bands after removing four bands containing zero values and 20 noisy bands affected by water absorption. The spatial size of the image is 145 × 145 pixels, and the spatial resolution is 20 m. The false-color composite image and the ground truth map are shown in Figure 5. The available number of samples is 10,249, ranging from 20 to 2455 per class.
- Pavia University: The second dataset was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor during a flight campaign over Pavia, northern Italy, on 8 July 2002. The original image was recorded with 115 spectral channels ranging from 0.43 μm to 0.86 μm. After removing noisy bands, 103 bands are used. The image size is 610 × 340 pixels with a spatial resolution of 1.3 m. A three-band false-color composite image and the ground truth map are shown in Figure 6. In the ground truth map, there are nine different land cover classes, each with more than 1000 labeled pixels.
- Kennedy Space Center (KSC): The third dataset was acquired by the AVIRIS sensor over the Kennedy Space Center, Florida, on 23 March 1996, and it contains 224 spectral bands. We utilize 176 of them after removing bands with water absorption or a low signal-to-noise ratio. The spatial size of the image is 512 × 614 pixels, and the spatial resolution is 18 m. Discriminating different land covers in this dataset is difficult due to the similarity of spectral signatures among certain vegetation types. For classification purposes, thirteen classes representing the various land cover types in this environment are defined. Figure 7 shows a false-color composite image and the ground truth map.
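Per the sample tables below, roughly 10% of the labelled pixels in each class are drawn for training on the Indian Pines and KSC datasets. A sketch of such a per-class split; the `stratified_split` helper, the fraction, and the seed are illustrative assumptions.

```python
import random
from collections import defaultdict

def stratified_split(labels, train_frac=0.1, seed=0):
    """Split labelled sample indices into train/test sets class by class."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, c in enumerate(labels):
        by_class[c].append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        k = max(1, round(len(idxs) * train_frac))   # at least one per class
        train.extend(idxs[:k])
        test.extend(idxs[k:])
    return train, test

# Class sizes of Alfalfa (46) and Corn-notill (1428) from the Indian Pines table.
labels = [1] * 46 + [2] * 1428
tr, te = stratified_split(labels)
print(len(tr), len(te))   # 148 1326  (5 + 143 train, 41 + 1285 test)
```

Sampling per class rather than over the whole label vector keeps rare classes such as Oats (20 samples) represented in the training set.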
4.2. Experimental Setup
4.3. Parameter Selection
4.4. Performance Comparison
5. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
Numbers of training and test samples for the Indian Pines dataset:

No. | Class | Total | Training | Test |
---|---|---|---|---|
C1 | Alfalfa | 46 | 5 | 41 |
C2 | Corn-notill | 1428 | 143 | 1285 |
C3 | Corn-mintill | 830 | 83 | 747 |
C4 | Corn | 237 | 24 | 213 |
C5 | Grass-pasture | 483 | 48 | 435 |
C6 | Grass-trees | 730 | 73 | 657 |
C7 | Grass-pasture-mowed | 28 | 3 | 25 |
C8 | Hay-windrowed | 478 | 48 | 430 |
C9 | Oats | 20 | 2 | 18 |
C10 | Soybean-notill | 972 | 97 | 875 |
C11 | Soybean-mintill | 2455 | 246 | 2209 |
C12 | Soybean-clean | 593 | 59 | 534 |
C13 | Wheat | 205 | 21 | 184 |
C14 | Woods | 1265 | 127 | 1138 |
C15 | Buildings-Grass-Trees-Drives | 386 | 39 | 347 |
C16 | Stone-Steel-Towers | 93 | 9 | 84 |
Numbers of training and test samples for the Pavia University dataset:

No. | Class | Total | Training | Test |
---|---|---|---|---|
C1 | Asphalt | 6631 | 548 | 6083 |
C2 | Meadows | 18,649 | 540 | 18,109 |
C3 | Gravel | 2099 | 392 | 1707 |
C4 | Trees | 3064 | 524 | 2540 |
C5 | Painted metal sheets | 1345 | 265 | 1080 |
C6 | Bare Soil | 5029 | 532 | 4497 |
C7 | Bitumen | 1330 | 375 | 955 |
C8 | Self-Blocking Bricks | 3682 | 514 | 3168 |
C9 | Shadows | 947 | 231 | 716 |
Numbers of training and test samples for the KSC dataset:

No. | Class | Total | Training | Test |
---|---|---|---|---|
C1 | Scrub | 761 | 76 | 685 |
C2 | Willow swamp | 243 | 24 | 219 |
C3 | Cabbage palm hammock | 256 | 26 | 230 |
C4 | Cabbage palm/oak hammock | 252 | 25 | 227 |
C5 | Slash pine | 161 | 16 | 145 |
C6 | Oak/broadleaf hammock | 229 | 23 | 206 |
C7 | Hardwood swamp | 105 | 11 | 94 |
C8 | Graminoid marsh | 431 | 43 | 388 |
C9 | Spartina marsh | 520 | 52 | 468 |
C10 | Cattail marsh | 404 | 40 | 364 |
C11 | Salt marsh | 419 | 42 | 377 |
C12 | Mud flats | 503 | 50 | 453 |
C13 | Water | 927 | 93 | 834 |
Configuration of the forward and backward CLSTM layers:

Direction | Convolution | MaxPooling | Dropout |
---|---|---|---|
Forward | 3 × 3 × 32 | 2 × 2 | 0.6 |
Backward | 3 × 3 × 32 | 2 × 2 | 0.6 |
Layer | Input | Conv-Output | Pool-Output |
---|---|---|---|
F-CLSTM | | | |
B-CLSTM | | | |

Layer | Input | Output |
---|---|---|
Softmax | C | |
Overall accuracy (%) of Bi-CLSTM with different spatial sizes of the input cube:

Size | | | |
---|---|---|---|---|
OA(%) | 96.12 | 97.78 | 98.57 | 99.13 |
Comparison between the unidirectional F-CLSTM and the bidirectional Bi-CLSTM:

Network | F-CLSTM | Bi-CLSTM |
---|---|---|
OA(%) | 95.44 | 99.13 |
Effects of dropout and data augmentation on OA (%):

Operator | Yes | No |
---|---|---|
Dropout | 99.13 | 94.41 |
Data augmentation | 99.13 | 95.07 |
Classification results (%) of eight methods on the Indian Pines dataset (mean ± standard deviation):

Label | Original | RLDE | MDA | 2D-CNN | 3D-CNN | LSTM | CNN+LSTM | Bi-CLSTM |
---|---|---|---|---|---|---|---|---|
OA | 77.44 ± 0.71 | 80.97 ± 0.60 | 92.31 ± 0.43 | 90.14 ± 0.78 | 95.30 ± 0.34 | 72.22 ± 3.65 | 94.15 ± 0.84 | 96.78 ± 0.35 |
AA | 74.94 ± 0.99 | 80.94 ± 2.12 | 89.54 ± 3.08 | 85.66 ± 3.24 | 92.02 ± 2.09 | 61.72 ± 3.38 | 90.30 ± 4.13 | 94.47 ± 0.83 |
Kappa | 74.32 ± 0.78 | 78.25 ± 0.70 | 91.21 ± 0.50 | 88.73 ± 0.90 | 94.65 ± 0.39 | 68.24 ± 4.13 | 93.50 ± 1.00 | 96.33 ± 0.40 |
C1 | 56.96 ± 10.91 | 64.78 ± 15.25 | 73.17 ± 17.92 | 71.22 ± 15.75 | 92.68 ± 10.63 | 25.85 ± 17.47 | 91.06 ± 7.45 | 93.66 ± 6.12 |
C2 | 79.75 ± 2.77 | 78.39 ± 1.34 | 93.48 ± 1.42 | 90.10 ± 2.33 | 95.41 ± 2.58 | 66.60 ± 5.16 | 94.26 ± 2.58 | 96.84 ± 2.05 |
C3 | 66.60 ± 3.03 | 68.10 ± 2.16 | 84.02 ± 3.11 | 91.03 ± 2.73 | 96.16 ± 1.82 | 54.83 ± 8.31 | 95.29 ± 3.02 | 97.22 ± 2.02 |
C4 | 59.24 ± 7.14 | 70.80 ± 6.04 | 83.57 ± 2.23 | 85.73 ± 5.02 | 92.49 ± 4.48 | 43.94 ± 13.29 | 93.80 ± 7.08 | 96.71 ± 3.59 |
C5 | 90.31 ± 1.45 | 92.17 ± 1.97 | 96.69 ± 1.39 | 83.36 ± 5.75 | 87.89 ± 3.32 | 83.45 ± 4.45 | 84.78 ± 5.45 | 92.28 ± 3.82 |
C6 | 95.78 ± 1.64 | 94.90 ± 2.04 | 99.15 ± 0.51 | 91.99 ± 3.25 | 95.23 ± 2.21 | 87.76 ± 4.02 | 90.87 ± 6.10 | 99.39 ± 0.61 |
C7 | 80.00 ± 7.82 | 85.71 ± 6.68 | 93.60 ± 6.07 | 85.60 ± 12.20 | 86.67 ± 12.22 | 23.20 ± 20.47 | 84.00 ± 11.31 | 92.00 ± 9.80 |
C8 | 97.41 ± 0.84 | 99.12 ± 0.95 | 99.91 ± 0.13 | 97.35 ± 3.75 | 99.84 ± 0.27 | 95.40 ± 1.86 | 99.07 ± 1.57 | 99.91 ± 0.21 |
C9 | 35.00 ± 10.61 | 73.00 ± 21.10 | 63.33 ± 24.72 | 54.45 ± 23.70 | 72.22 ± 20.03 | 30.00 ± 15.01 | 55.56 ± 23.57 | 76.67 ± 21.66 |
C10 | 66.32 ± 3.18 | 69.73 ± 1.07 | 82.15 ± 2.23 | 75.38 ± 8.97 | 91.24 ± 1.77 | 71.29 ± 3.97 | 93.35 ± 5.45 | 95.93 ± 2.00 |
C11 | 70.77 ± 2.42 | 79.38 ± 0.56 | 92.76 ± 1.45 | 94.36 ± 0.48 | 97.59 ± 0.96 | 75.08 ± 5.53 | 98.82 ± 0.35 | 96.31 ± 1.46 |
C12 | 64.42 ± 3.92 | 72.28 ± 3.42 | 91.35 ± 2.26 | 78.73 ± 8.00 | 93.01 ± 3.09 | 54.49 ± 8.73 | 89.78 ± 4.43 | 93.33 ± 3.12 |
C13 | 95.41 ± 2.62 | 97.56 ± 1.38 | 99.13 ± 0.49 | 95.98 ± 4.82 | 96.56 ± 3.62 | 91.85 ± 3.90 | 95.65 ± 3.03 | 95.76 ± 3.72 |
C14 | 92.66 ± 1.77 | 92.36 ± 0.92 | 98.22 ± 0.39 | 96.80 ± 1.08 | 98.83 ± 0.89 | 90.37 ± 4.93 | 95.36 ± 4.35 | 99.49 ± 0.35 |
C15 | 60.88 ± 6.27 | 67.10 ± 6.39 | 87.84 ± 4.00 | 96.54 ± 2.54 | 90.01 ± 7.21 | 30.49 ± 2.02 | 95.53 ± 7.08 | 98.67 ± 1.11 |
C16 | 87.53 ± 1.95 | 89.68 ± 3.28 | 94.29 ± 6.43 | 81.90 ± 17.71 | 86.51 ± 5.36 | 62.86 ± 10.43 | 87.62 ± 4.00 | 87.38 ± 9.09 |
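The OA, AA, and Kappa rows in these tables can be recomputed from a per-class confusion matrix. A minimal sketch with a made-up two-class matrix (not data from the paper):

```python
import numpy as np

def accuracy_metrics(conf):
    """Overall accuracy, average (per-class) accuracy, and Cohen's kappa
    from a confusion matrix (rows = true classes, cols = predictions)."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))
    # Expected chance agreement from the row/column marginals.
    pe = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / total ** 2
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

oa, aa, kappa = accuracy_metrics([[50, 2], [3, 45]])
print(round(oa, 3), round(aa, 3), round(kappa, 3))   # 0.95 0.95 0.9
```

Kappa discounts the agreement expected by chance, which is why it is lower than OA when class sizes are imbalanced.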
Classification results (%) of eight methods on the Pavia University dataset (mean ± standard deviation):

Label | Original | RLDE | MDA | 2D-CNN | 3D-CNN | LSTM | CNN+LSTM | Bi-CLSTM |
---|---|---|---|---|---|---|---|---|
OA | 89.12 ± 0.26 | 88.82 ± 0.25 | 96.95 ± 0.29 | 96.55 ± 0.85 | 97.65 ± 0.40 | 93.20 ± 0.71 | 97.11 ± 0.95 | 99.10 ± 0.16 |
AA | 90.50 ± 0.06 | 90.45 ± 0.06 | 96.86 ± 0.23 | 97.19 ± 0.51 | 97.74 ± 0.48 | 93.13 ± 0.42 | 98.27 ± 0.77 | 99.20 ± 0.17 |
Kappa | 85.81 ± 0.32 | 85.43 ± 0.31 | 95.93 ± 0.52 | 95.30 ± 1.13 | 96.80 ± 0.54 | 90.43 ± 0.91 | 96.09 ± 1.29 | 98.77 ± 0.21 |
C1 | 87.25 ± 0.57 | 87.20 ± 0.52 | 96.69 ± 0.41 | 96.72 ± 1.48 | 95.33 ± 3.73 | 91.33 ± 2.05 | 98.54 ± 0.94 | 98.56 ± 0.58 |
C2 | 89.10 ± 0.54 | 88.40 ± 0.52 | 97.76 ± 0.47 | 96.31 ± 1.75 | 97.99 ± 0.94 | 94.58 ± 1.77 | 95.51 ± 1.92 | 99.23 ± 0.39 |
C3 | 81.99 ± 1.05 | 81.69 ± 0.80 | 90.69 ± 1.44 | 97.15 ± 1.58 | 95.27 ± 1.81 | 83.93 ± 3.72 | 97.64 ± 3.68 | 99.27 ± 0.47 |
C4 | 95.65 ± 0.59 | 95.79 ± 0.56 | 98.44 ± 0.27 | 96.16 ± 1.29 | 98.49 ± 1.07 | 97.78 ± 2.36 | 98.80 ± 0.60 | 98.21 ± 0.92 |
C5 | 99.76 ± 0.14 | 99.87 ± 0.08 | 100.00 ± 0.00 | 99.81 ± 0.32 | 98.67 ± 1.26 | 99.46 ± 0.24 | 99.28 ± 0.59 | 99.87 ± 0.15 |
C6 | 88.78 ± 1.01 | 88.67 ± 0.67 | 96.26 ± 0.45 | 94.87 ± 3.62 | 99.21 ± 0.74 | 91.73 ± 4.17 | 98.40 ± 1.05 | 99.56 ± 0.29 |
C7 | 85.92 ± 0.93 | 86.06 ± 1.04 | 97.95 ± 0.62 | 97.44 ± 1.68 | 97.90 ± 1.36 | 90.76 ± 2.85 | 98.91 ± 1.86 | 99.75 ± 0.30 |
C8 | 86.14 ± 1.02 | 86.42 ± 0.73 | 93.98 ± 0.97 | 98.23 ± 0.91 | 97.84 ± 2.55 | 88.78 ± 2.44 | 98.48 ± 1.16 | 99.82 ± 0.55 |
C9 | 99.92 ± 0.05 | 99.94 ± 0.06 | 100.00 ± 0.00 | 98.04 ± 0.96 | 98.97 ± 0.93 | 99.83 ± 0.23 | 98.83 ± 0.90 | 99.53 ± 0.47 |
Classification results (%) of eight methods on the KSC dataset (mean ± standard deviation):

Label | Original | RLDE | MDA | 2D-CNN | 3D-CNN | LSTM | CNN+LSTM | Bi-CLSTM |
---|---|---|---|---|---|---|---|---|
OA | 93.16 ± 0.38 | 93.50 ± 0.31 | 96.81 ± 0.17 | 92.55 ± 0.84 | 97.14 ± 0.49 | 84.96 ± 1.26 | 96.12 ± 0.45 | 98.29 ± 0.98 |
AA | 89.15 ± 0.55 | 90.09 ± 0.71 | 95.30 ± 0.83 | 89.20 ± 1.50 | 95.92 ± 0.64 | 82.87 ± 1.67 | 94.91 ± 0.86 | 97.77 ± 1.37 |
Kappa | 92.38 ± 0.42 | 92.77 ± 0.34 | 96.45 ± 0.18 | 91.69 ± 0.95 | 96.82 ± 0.55 | 83.24 ± 1.41 | 95.68 ± 0.50 | 98.10 ± 1.09 |
C1 | 95.43 ± 2.54 | 95.30 ± 1.64 | 96.93 ± 1.03 | 94.86 ± 1.30 | 96.06 ± 1.24 | 96.53 ± 1.35 | 96.00 ± 2.53 | 98.87 ± 1.36 |
C2 | 91.44 ± 4.43 | 92.26 ± 5.48 | 97.26 ± 1.29 | 77.53 ± 5.05 | 98.48 ± 0.95 | 80.25 ± 2.67 | 89.04 ± 8.90 | 93.61 ± 5.93 |
C3 | 90.86 ± 6.55 | 88.44 ± 2.00 | 98.92 ± 0.30 | 84.52 ± 5.31 | 95.79 ± 3.93 | 95.36 ± 2.89 | 92.96 ± 6.29 | 99.35 ± 0.56 |
C4 | 79.52 ± 5.74 | 76.90 ± 5.48 | 90.31 ± 0.62 | 77.71 ± 11.85 | 90.89 ± 5.73 | 58.00 ± 11.42 | 87.31 ± 6.13 | 94.71 ± 2.07 |
C5 | 68.20 ± 7.71 | 77.64 ± 2.45 | 80.00 ± 7.80 | 80.97 ± 9.54 | 80.92 ± 2.87 | 60.00 ± 11.78 | 90.48 ± 3.91 | 97.24 ± 2.93 |
C6 | 67.34 ± 3.90 | 77.82 ± 0.72 | 92.47 ± 2.40 | 72.62 ± 14.78 | 97.25 ± 1.22 | 60.52 ± 7.35 | 93.30 ± 2.58 | 94.54 ± 9.01 |
C7 | 84.19 ± 5.33 | 82.67 ± 16.06 | 94.68 ± 6.01 | 93.19 ± 5.35 | 96.45 ± 6.14 | 57.45 ± 20.11 | 99.36 ± 1.43 | 99.74 ± 0.53 |
C8 | 95.17 ± 1.26 | 91.97 ± 2.39 | 96.26 ± 4.19 | 93.87 ± 2.41 | 96.65 ± 2.11 | 90.40 ± 4.87 | 92.11 ± 2.53 | 97.23 ± 3.16 |
C9 | 95.92 ± 1.69 | 98.08 ± 1.50 | 99.89 ± 0.15 | 95.85 ± 3.03 | 98.22 ± 0.62 | 92.74 ± 2.10 | 99.44 ± 1.24 | 97.81 ± 1.08 |
C10 | 96.78 ± 1.56 | 96.78 ± 1.20 | 98.35 ± 0.39 | 96.81 ± 1.79 | 98.72 ± 0.97 | 93.96 ± 3.60 | 95.71 ± 2.53 | 99.66 ± 0.52 |
C11 | 98.14 ± 0.87 | 98.23 ± 1.38 | 99.33 ± 0.19 | 94.27 ± 2.21 | 99.73 ± 0.46 | 98.63 ± 1.16 | 99.84 ± 0.36 | 98.94 ± 2.12 |
C12 | 95.90 ± 1.23 | 95.39 ± 1.31 | 94.59 ± 1.72 | 97.35 ± 2.09 | 97.86 ± 1.21 | 93.47 ± 2.63 | 98.28 ± 2.87 | 99.28 ± 0.89 |
C13 | 100.00 ± 0.00 | 99.68 ± 0.40 | 99.94 ± 0.08 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
Running times (training and test) of the deep networks on the three datasets:

Dataset | 2D-CNN Train | 2D-CNN Test | 3D-CNN Train | 3D-CNN Test | LSTM Train | LSTM Test | CNN+LSTM Train | CNN+LSTM Test | Bi-CLSTM Train | Bi-CLSTM Test |
---|---|---|---|---|---|---|---|---|---|---|
Indian Pines | 10.00 | 0.07 | 1435.33 | 70.57 | 75.00 | 0.55 | 260.00 | 3.38 | 535.50 | 12.62 |
Pavia University | 15.00 | 0.23 | 818.33 | 18.38 | 85.00 | 0.52 | 291.67 | 4.23 | 432.00 | 12.95 |
KSC | 5.00 | 0.03 | 183.33 | 3.73 | 25.00 | 0.18 | 65.00 | 1.07 | 112.50 | 2.65 |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Liu, Q.; Zhou, F.; Hang, R.; Yuan, X. Bidirectional-Convolutional LSTM Based Spectral-Spatial Feature Learning for Hyperspectral Image Classification. Remote Sens. 2017, 9, 1330. https://doi.org/10.3390/rs9121330