Spatial Feature Enhancement and Attention-Guided Bidirectional Sequential Spectral Feature Extraction for Hyperspectral Image Classification
Figure 1. The feature extraction principle of 2D convolution.
Figure 2. The architecture of the proposed IFEE. In the spatial feature extraction branch, adaptive guided filtering supplements the edge information of the whole HSI input. The image enhancement module then improves the spatial resolution of the filtered image, and the enhanced HSI patches are sent to the MMFE module to obtain multi-scale spatial features. Spectral features are extracted by the attention-guided bidirectional sequential spectral feature extraction module, which takes two forms of input: a patch input and a vector input. The spatial and spectral features are concatenated and then classified through Softmax.
Figure 3. The images before and after adaptive guided filtering on the SS dataset. (a) Input image; (b) Output image.
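The adaptive module varies the regularization coefficient that classic guided filtering keeps fixed. As a point of reference only (not the paper's adaptive variant), the fixed-coefficient guided filter of He et al. can be sketched for a single band as follows:

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window, computed with an integral image."""
    pad = np.pad(img, r, mode='edge')          # full windows at the borders
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))            # zero row/column for window sums
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=2, eps=1e-3):
    """He et al.'s guided filter: locally fit q = a*I + b in each window,
    then average the per-window coefficients. eps is the fixed
    regularization coefficient that the paper's adaptive module replaces."""
    mean_I, mean_p = box_filter(I, r), box_filter(p, r)
    cov_Ip = box_filter(I * p, r) - mean_I * mean_p
    var_I = box_filter(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)                 # edge-aware slope
    b = mean_p - a * mean_I                    # offset
    return box_filter(a, r) * I + box_filter(b, r)
```

On flat regions `var_I` is small, so `a` shrinks toward 0 and the filter smooths; near edges `var_I` dominates `eps` and the structure of the guide `I` is preserved, which is the edge-preserving behavior Figure 3 illustrates.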
Figure 4. The image enhancement module with 2DCNNs. During pre-processing, the input image cube P is first downscaled by a certain proportion through bicubic interpolation to obtain a smaller image cube B, which is then enlarged back to an image P+ of the same size as the original input P. Three 2D convolution layers with kernels of different sizes sequentially enhance the spatial information while reconstructing P+, so the output H has a higher spatial resolution than the input P. We take the mean square error (MSE) as the loss function for back-propagation and parameter updates.
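A minimal single-band sketch of the pipeline this caption describes: shrink, restore the original size, then refine with three convolution layers (ReLU after the first two, matching the layer table below) and score against the original with MSE. The kernel sizes (9, 1, 5) are an assumption borrowed from SRCNN-style super-resolution, not the paper's values, and nearest-neighbor resizing stands in for bicubic interpolation:

```python
import numpy as np

def conv2d_same(x, k):
    """Single-channel 'same' cross-correlation with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def enhance(P, rng=np.random.default_rng(0)):
    """Toy version of Figure 4 for one band: P -> B -> P+ -> H, plus the
    MSE training loss against P. Kernel sizes are hypothetical."""
    Hh, Ww = P.shape
    B = P[::2, ::2]                                        # downscaled cube B
    P_plus = np.repeat(np.repeat(B, 2, 0), 2, 1)[:Hh, :Ww]  # restored P+
    h = P_plus
    for idx, size in enumerate((9, 1, 5)):                 # Conv1..Conv3
        k = rng.normal(scale=0.01, size=(size, size))
        k[size // 2, size // 2] += 1.0                     # near-identity init
        h = conv2d_same(h, k)
        if idx < 2:
            h = np.maximum(h, 0)                           # ReLU1 / ReLU2
    mse = np.mean((h - P) ** 2)                            # loss for backprop
    return h, mse
```

In training, gradients of `mse` with respect to the kernels would drive the update; the sketch only shows the forward pass.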
Figure 5. The specific structures of BiLSTM and LSTM. In BiLSTM, forward and reverse LSTMs extract spectral features from the input vector in both directions. Output (1) and Output (2) are the output feature vectors of the reverse and forward passes, respectively, and are concatenated into the final output feature vector Y. LSTM is mainly composed of the forget gate, input gate, and output gate, which are combinations of the sigmoid and tanh functions. Input h(t − 1) is the output of the previous cell, and h(t) is the output we need. C(t − 1), the memory state of the previous cell, is updated to C(t) and passed on, and r(t) is the input of the current cell. (a) BiLSTM; (b) LSTM.
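The gate structure in (b) can be written compactly. A minimal NumPy sketch follows; for brevity it shares one weight set across both directions, whereas a real BiLSTM learns separate forward and reverse parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b, d):
    """One LSTM cell: forget, input, and output gates plus a tanh
    candidate update the memory state C(t) and emit h(t)."""
    z = W @ x + U @ h_prev + b
    f = sigmoid(z[0:d])              # forget gate: what to drop from C(t-1)
    i = sigmoid(z[d:2 * d])          # input gate: what to write
    o = sigmoid(z[2 * d:3 * d])      # output gate: what to expose
    g = np.tanh(z[3 * d:4 * d])      # candidate memory content
    c = f * c_prev + i * g           # C(t)
    h = o * np.tanh(c)               # h(t)
    return h, c

def bilstm(X, W, U, b, d):
    """Run the sequence forward and reversed, then concatenate the two
    final outputs into the feature vector Y, as in Figure 5a."""
    def last_h(seq):
        h, c = np.zeros(d), np.zeros(d)
        for x in seq:
            h, c = lstm_step(x, h, c, W, U, b, d)
        return h
    return np.concatenate([last_h(X), last_h(X[::-1])])
```

Because the reverse pass reads the spectral sequence from the last band to the first, Y encodes context from both spectral directions, which is the motivation for the bidirectional design.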
Figure 6. The structure of the attention mechanism. The output vectors of a two-branch pooling process over the input patch are added to combine global and local spectral information. Two convolution blocks then produce the attention weights, which are multiplied by the input vector transformed into a sequence. By synthesizing the pooled sequence after addition with the input vector before and after weighting, we obtain the output vector passed to the BiLSTM.
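A hedged sketch of the mechanism this caption outlines. The two convolution blocks are modeled as dense matrices `w1`, `w2` (hypothetical shapes), and the residual-style combination of the vector before and after weighting is an assumption about the "synthesizing" step, not a confirmed detail of the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spectral_attention(patch, w1, w2):
    """Two-branch pooling over the spatial dims of an (H, W, C) patch:
    average pooling captures global spectral statistics, max pooling the
    most salient local responses. Their sum feeds two learnable blocks
    that emit per-channel weights rescaling the center spectrum."""
    avg = patch.mean(axis=(0, 1))             # global branch, shape (C,)
    mx = patch.max(axis=(0, 1))               # salient branch, shape (C,)
    s = avg + mx                              # combined pooled sequence
    a = sigmoid(w2 @ np.maximum(w1 @ s, 0))   # attention weights in (0, 1)
    v = patch[patch.shape[0] // 2, patch.shape[1] // 2]  # center spectrum
    return v + a * v                          # vector before + after weighting
```

Channels with weights near 1 are roughly doubled while noisy channels with weights near 0 pass through almost unchanged, which matches the caption's goal of emphasizing representative bands before the BiLSTM.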
Figure 7. Classification maps generated by different models on the IP dataset. (a) Ground truth; (b) SVM; (c) 3DCNN; (d) LSTM; (e) MSCNN; (f) HybridSN; (g) SSDF; (h) SSFTT; (i) IFEE.
Figure 8. Classification maps generated by different models on the UP dataset. (a) Ground truth; (b) SVM; (c) 3DCNN; (d) LSTM; (e) MSCNN; (f) HybridSN; (g) SSDF; (h) SSFTT; (i) IFEE.
Figure 9. Classification maps generated by different models on the SS dataset. (a) Ground truth; (b) SVM; (c) 3DCNN; (d) LSTM; (e) MSCNN; (f) HybridSN; (g) SSDF; (h) SSFTT; (i) IFEE.
Figure 10. The 2D gray images before and after enhancement on the IP, UP, and SS datasets. (a) IP before; (b) UP before; (c) SS before; (d) IP after; (e) UP after; (f) SS after.
Figure 11. The curves of OA values of IFEE and the other methods under two different ways of selecting training samples. (a–c) Curves for different proportions of training samples on the three datasets: (a) IP; (b) UP; (c) SS. (d–f) Curves for different numbers of training samples per class on the three datasets: (d) IP; (e) UP; (f) SS.
Abstract
1. Introduction
- (1) Because the effect of traditional guided filtering is limited by its fixed regularization coefficient, we introduce an adaptive guided filtering module to preserve and enhance the subtle features and edge information of the input HSIs, which may be lost in dimensionality reduction operations.
- (2) The low spatial resolution of HSIs makes it hard to obtain sufficient spatial features and details of ground objects, limiting classification performance. In response, we propose a lightweight image enhancement module composed of 2DCNNs that improves the spatial resolution of the feature map produced by adaptive guided filtering, providing more detailed spatial information for subsequent feature extraction.
- (3) To reduce noise and mitigate the impact of data redundancy in the spectral dimension, we design a spectral attention mechanism that takes spatial patches and sequences randomly selected from the original HSIs as two different inputs, emphasizing representative spectral channels while suppressing noise. Combining this attention mechanism with bidirectional sequential spectral feature extraction further improves classification accuracy.
- (4) Considering that labeled samples are limited in practice, we evaluate the classification performance of the proposed IFEE with a small number of training samples. Our method achieves the best classification results on all three datasets with only five labeled samples per class for training.
2. Materials and Methods
2.1. Guided Filtering
2.2. Image Enhancement and 2D Convolution
2.3. Attention Mechanism
2.4. Methods
2.4.1. Adaptive Guided Filtering Module
2.4.2. Image Enhancement Module
2.4.3. Attention-Guided Bidirectional Sequential Spectral Feature Extraction Module
3. Results
3.1. Experimental Settings
3.2. Datasets
3.3. Experimental Results
4. Discussion
4.1. Effectiveness of Dual-Branch Structure
4.2. Effectiveness of Different Modules
4.3. Analysis of BiLSTM with Spectral Attention Mechanism Module
4.4. Performance under Limited Labeled Samples
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Sun, G.; Zhang, A.; Ren, J.; Ma, J.; Wang, P.; Zhang, Y.; Jia, X. Gravitation-based edge detection in hyperspectral images. Remote Sens. 2017, 9, 592. [Google Scholar] [CrossRef]
- Yang, X.; Yu, Y. Estimating soil salinity under various moisture conditions: An experimental study. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2525–2533. [Google Scholar] [CrossRef]
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
- Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933, 24, 417–441. [Google Scholar] [CrossRef]
- Camps-Valls, G.; Bruzzone, L. Kernel-based methods for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1351–1362. [Google Scholar] [CrossRef]
- Tarabalka, Y.; Chanussot, J.; Benediktsson, J.A. Segmentation and classification of hyperspectral images using watershed transformation. Pattern Recogn. 2010, 43, 2367–2379. [Google Scholar] [CrossRef]
- He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef]
- Li, S.; Kang, X.; Hu, J. Image fusion with guided filtering. IEEE Trans. Image Process. 2013, 22, 2864–2875. [Google Scholar]
- Guo, Y.; Cao, H.; Bai, J.; Bai, Y. High efficient deep feature extraction and classification of spectral-spatial hyperspectral image using cross domain convolutional neural networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 345–356. [Google Scholar] [CrossRef]
- Zhao, W.; Du, S. Spectral-spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
- Mou, L.; Ghamisi, P.; Zhu, X. Deep recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef]
- Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-excitation networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023. [Google Scholar] [CrossRef] [PubMed]
- Wang, Q.; Wu, B.; Zhu, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11534–11542. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.; Kweon, I. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral-spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858. [Google Scholar] [CrossRef]
- Song, W.; Li, S.; Fang, L.; Lu, T. Hyperspectral image classification with deep feature fusion network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3173–3184. [Google Scholar] [CrossRef]
- Mu, C.; Guo, Z.; Liu, Y. A multi-scale and multi-level spectral-spatial feature fusion network for hyperspectral image classification. Remote Sens. 2020, 12, 125. [Google Scholar] [CrossRef]
- Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar]
- Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral–spatial feature tokenization transformer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60. [Google Scholar] [CrossRef]
- Mu, C.; Liu, Y.; Liu, Y. Hyperspectral image spectral-spatial classification method based on deep adaptive feature fusion. Remote Sens. 2021, 13, 746. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- He, J.; Zhao, L.; Yang, H.; Zhang, M.; Li, W. HSI-BERT: Hyperspectral image classification using the bidirectional encoder representation from transformers. IEEE Trans. Geosci. Remote Sens. 2020, 58, 165–178. [Google Scholar] [CrossRef]
- Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. Spectralformer: Rethinking hyperspectral image classification with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15. [Google Scholar] [CrossRef]
- Zhang, J.; Meng, Z.; Zhao, F.; Liu, H.; Chang, Z. Convolution transformer mixer for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
- Yang, X.; Cao, W.; Lu, Y.; Zhou, Y. Hyperspectral image transformer classification networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
- Song, L.; Feng, Z.; Yang, S.; Zhang, X.; Jiao, L. Interactive spectral-spatial transformer for hyperspectral image classification. IEEE Trans. Circuits Syst. Video Technol. 2024, 1, 1. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, X.; Jiang, B.; Chen, L.; Luo, B. SemanticFormer: Hyperspectral image classification via semantic transformer. Pattern Recognit. Lett. 2024, 179, 1–8. [Google Scholar] [CrossRef]
- Yuan, W.; Meng, C.; Bai, X. Weighted side-window based gradient guided image filtering. Pattern Recognit. 2023, 146, 110006. [Google Scholar] [CrossRef]
- Tyagi, V. Image enhancement in spatial domain. In Understanding Digital Image Processing; CRC Press: Boca Raton, FL, USA, 2018; pp. 36–56. [Google Scholar]
- Zhang, X.; Qin, H.; Yu, Y.; Yan, X.; Yang, S.; Wang, G. Unsupervised low-light image enhancement via virtual diffraction information in frequency domain. Remote Sens. 2023, 15, 3580. [Google Scholar] [CrossRef]
- Yao, Z.; Fan, G.; Fan, J.; Gan, M.; Chen, C. Spatial-frequency dual-domain feature fusion network for low-light remote sensing image enhancement. IEEE Trans. Geosci. Remote Sens. 2024, 1, 1. [Google Scholar] [CrossRef]
- Li, Y.; Liu, Z.; Yang, J.; Zhang, H. Wavelet transform feature enhancement for semantic segmentation of remote sensing images. Remote Sens. 2023, 15, 5644. [Google Scholar] [CrossRef]
- Ye, X.; He, Z.; Heng, W.; Li, Y. Toward understanding the effectiveness of attention mechanism. AIP Adv. 2023, 13, 035019. [Google Scholar] [CrossRef]
- Feng, Y.; Zhu, X.; Zhang, X.; Li, Y.; Lu, H. PAMSNet: A medical image segmentation network based on spatial pyramid and attention mechanism. Biomed. Signal Process. Control 2024, 94, 106285. [Google Scholar] [CrossRef]
- Yu, Y.; Zhang, Y.; Cheng, Z.; Song, Z.; Tang, C. Multi-scale spatial pyramid attention mechanism for image recognition: An effective approach. Eng. Appl. Artif. Intell. 2024, 133, 108261. [Google Scholar] [CrossRef]
- Kang, J.; Zhang, Y.; Liu, X.; Cheng, Z. Hyperspectral image classification using spectral-spatial double-branch attention mechanism. Remote Sens. 2024, 16, 193. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, W.; Li, Y.; Jia, Y.; Xu, Y.; Ling, Y.; Ma, J. An attention mechanism module with spatial perception and channel information interaction. Complex Intell. Syst. 2024, 10, 5427–5444. [Google Scholar] [CrossRef]
- An, W.; Wu, G. Hybrid spatial-channel attention mechanism for cross-age face recognition. Electronics 2024, 13, 1257. [Google Scholar] [CrossRef]
- Li, M.; Liu, Y.; Xue, G.; Huang, Y.; Yang, G. Exploring the relationship between center and neighborhoods: Central vector oriented self-similarity network for hyperspectral image classification. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 1979–1993. [Google Scholar] [CrossRef]
- Zhang, L.; Ruan, C.; Zhao, J.; Huang, L. Triple-attention residual networks for hyperspectral image classification. In Proceedings of the International Conference on Computer Vision, Image and Deep Learning (CVIDL), Zhuhai, China, 19–21 April 2024; pp. 1065–1070. [Google Scholar]
- Meng, Z.; Yan, Q.; Zhao, F.; Liang, M. Hyperspectral image classification with dynamic spatial-spectral attention network. In Proceedings of the Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Athens, Greece, 31 October–2 November 2023. [Google Scholar]
- Hou, Q.; Zhou, D.; Feng, J. Coordinate attention for efficient mobile network design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13708–13717. [Google Scholar]
- Zhou, H.; Zhang, X.; Zhang, C.; Ma, Q. Vision transformer with contrastive learning for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1. [Google Scholar] [CrossRef]
- Li, Z.; Huang, W.; Wang, L.; Xin, Z.; Qiao, M. CNN and Transformer interaction network for hyperspectral image classification. Int. J. Remote Sens. 2023, 44, 5548–5573. [Google Scholar] [CrossRef]
- Yang, X.; Cao, W.; Lu, Y.; Zhou, Y. QTN: Quaternion transformer network for hyperspectral image classification. IEEE Trans. Circuits Syst. Video Technol. 2023. [Google Scholar] [CrossRef]
- Jia, S.; Wang, Y.; Jiang, S.; He, R. A center-masked transformer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–16. [Google Scholar] [CrossRef]
- Ahmad, M.; Ghous, U.; Usama, M.; Mazzara, M. WaveFormer: Spectral–spatial wavelet transformer for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
- Zhao, Z.; Xu, X.; Li, S.; Plaza, A. Hyperspectral image classification using groupwise separable convolutional vision transformer network. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–17. [Google Scholar] [CrossRef]
- Huang, K.; Deng, X.; Geng, J.; Jiang, W. Self-attention and mutual-attention for few-shot hyperspectral image classification. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, 11–16 July 2021. [Google Scholar]
- Tang, P.; Zhang, M.; Liu, Z.; Song, R. Double attention transformer for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]
- Nie, F.; Huang, H.; Ding, C.; Luo, D.; Wang, H. Robust principal component analysis with non-greedy L1-norm maximization. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Barcelona, Spain, 16–22 July 2011; pp. 1433–1438. [Google Scholar]
- Chen, G.; Krzyzak, A.; Qian, S. Noise robust hyperspectral image classification with MNF-based edge preserving features. Image Anal. Stereol. 2023, 42, 93–99. [Google Scholar] [CrossRef]
- Li, X.; Rodolfo, C. BiLSTM model with attention mechanism for sentiment classification on Chinese mixed text comments. IEEE Access 2023, 11, 26199–26210. [Google Scholar]
- Yang, A.; Li, M.; Ding, Y.; Hong, D.; Lv, Y.; He, Y. GTFN: GCN and transformer fusion with spatial-spectral features for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61. [Google Scholar] [CrossRef]
Layer | Input | Convolution Kernel | Output |
---|---|---|---|
Conv1 | | | |
ReLU1 | | – | |
Conv2 | | | |
ReLU2 | | – | |
Conv3 | | | |
Class | Category | Training | Validation | Test |
---|---|---|---|---|
1 | Alfalfa | 2 | 2 | 42 |
2 | Corn-notill | 43 | 43 | 1342 |
3 | Corn-mintill | 25 | 25 | 780 |
4 | Corn | 8 | 8 | 221 |
5 | Grass-pasture | 15 | 15 | 453 |
6 | Grass-trees | 22 | 22 | 686 |
7 | Grass-pasture-mowed | 1 | 1 | 26 |
8 | Hay-windrowed | 15 | 15 | 448 |
9 | Oats | 1 | 1 | 18 |
10 | Soybean-notill | 30 | 30 | 912 |
11 | Soybean-mintill | 74 | 74 | 2307 |
12 | Soybean-clean | 18 | 18 | 557 |
13 | Wheat | 7 | 7 | 191 |
14 | Woods | 38 | 38 | 1189 |
15 | Buildings-Grass-Trees-Drives | 12 | 12 | 362 |
16 | Stone-Steel-Towers | 3 | 3 | 87 |
Total | 314 | 314 | 9621 |
Class | Category | Training | Validation | Test |
---|---|---|---|---|
1 | Asphalt | 67 | 67 | 6497 |
2 | Meadows | 187 | 187 | 18,275 |
3 | Gravel | 21 | 21 | 2057 |
4 | Trees | 31 | 31 | 3002 |
5 | Painted metal sheets | 14 | 14 | 1317 |
6 | Bare Soil | 51 | 51 | 4927 |
7 | Bitumen | 14 | 14 | 1302 |
8 | Self-Blocking Bricks | 37 | 37 | 3608 |
9 | Shadows | 10 | 10 | 927 |
Total | 432 | 432 | 41,912 |
Class | Category | Training | Validation | Test |
---|---|---|---|---|
1 | Brocoli_green_weeds_1 | 21 | 21 | 1967 |
2 | Brocoli_green_weeds_2 | 38 | 38 | 3650 |
3 | Fallow | 20 | 20 | 1936 |
4 | Fallow_rough_plow | 14 | 14 | 1366 |
5 | Fallow_smooth | 27 | 27 | 2624 |
6 | Stubble | 40 | 40 | 3879 |
7 | Celery | 36 | 36 | 3507 |
8 | Grapes_untrained | 113 | 113 | 11,045 |
9 | Soil_vinyard_develop | 63 | 63 | 6077 |
10 | Corn_senesced_green_weeds | 33 | 33 | 3212 |
11 | Lettuce_romaine_4wk | 11 | 11 | 1046 |
12 | Lettuce_romaine_5wk | 20 | 20 | 1887 |
13 | Lettuce_romaine_6wk | 10 | 10 | 896 |
14 | Lettuce_romaine_7wk | 11 | 11 | 1048 |
15 | Vinyard_untrained | 73 | 73 | 7122 |
16 | Vinyard_vertical_trellis | 19 | 19 | 1769 |
Total | 549 | 549 | 53,031 |
Class | SVM | 3DCNN | LSTM | MSCNN | HybridSN | SSDF | SSFTT | IFEE |
---|---|---|---|---|---|---|---|---|
1 | 8.61 | 42.22 | 23.53 | 70.22 | 16.89 | 95.67 | 36.44 | 97.66 |
2 | 63.84 | 67.79 | 64.84 | 94.65 | 80.71 | 90.76 | 86.99 | 91.75 |
3 | 51.89 | 56.23 | 61.06 | 91.02 | 84.57 | 91.58 | 89.42 | 93.46 |
4 | 25.04 | 47.39 | 42.17 | 84.61 | 70.96 | 92.77 | 92.52 | 93.57 |
5 | 69.85 | 78.42 | 91.77 | 82.09 | 94.88 | 90.27 | 97.23 | 95.37 |
6 | 86.83 | 94.19 | 79.66 | 98.39 | 97.85 | 97.31 | 97.20 | 95.34 |
7 | 0.00 | 0.00 | 25.00 | 0.00 | 55.56 | 57.59 | 100.00 | 47.87 |
8 | 98.41 | 97.28 | 82.24 | 98.06 | 99.96 | 98.98 | 98.66 | 99.78 |
9 | 0.00 | 0.00 | 0.00 | 0.00 | 91.58 | 78.53 | 93.68 | 96.98 |
10 | 65.56 | 67.34 | 73.38 | 87.85 | 89.65 | 94.62 | 93.04 | 95.63 |
11 | 74.37 | 75.35 | 65.86 | 96.59 | 95.22 | 97.78 | 98.16 | 98.26 |
12 | 42.57 | 43.23 | 56.11 | 87.99 | 64.90 | 82.85 | 82.61 | 87.88 |
13 | 90.45 | 87.64 | 80.25 | 98.39 | 81.71 | 97.53 | 94.27 | 97.43 |
14 | 92.39 | 93.52 | 88.71 | 99.85 | 98.53 | 99.27 | 98.94 | 98.34 |
15 | 33.28 | 51.84 | 50.25 | 85.23 | 85.83 | 94.71 | 92.73 | 93.65 |
16 | 45.05 | 76.04 | 86.56 | 99.34 | 72.22 | 88.04 | 77.78 | 92.85 |
OA (%) | 69.13 ± 1.02 | 73.09 ± 1.68 | 71.24 ± 0.01 | 93.22 ± 0.91 | 88.99 ± 1.11 | 94.30 ± 1.83 | 93.64 ± 0.68 | 95.42 ± 0.16
AA (%) | 52.97 ± 0.93 | 61.16 ± 3.35 | 60.71 ± 0.04 | 79.43 ± 0.84 | 80.06 ± 1.47 | 90.52 ± 1.49 | 89.36 ± 1.88 | 92.03 ± 0.65
Kappa (×100) | 64.58 ± 1.10 | 69.25 ± 2.00 | 66.90 ± 0.01 | 92.24 ± 1.04 | 87.40 ± 1.23 | 93.51 ± 2.08 | 92.73 ± 0.78 | 94.78 ± 0.19
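The OA, AA, and Kappa values reported in these tables follow the standard definitions over a per-class confusion matrix; a minimal sketch:

```python
import numpy as np

def oa_aa_kappa(conf):
    """Overall accuracy, average (per-class) accuracy, and Cohen's kappa
    from a confusion matrix (rows = ground truth, columns = prediction)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    oa = np.trace(conf) / n                           # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))    # mean per-class accuracy
    pe = conf.sum(axis=0) @ conf.sum(axis=1) / n ** 2 # chance agreement
    kappa = (oa - pe) / (1.0 - pe)                    # Cohen's kappa
    return oa, aa, kappa
```

For example, a two-class matrix [[40, 10], [10, 40]] gives OA = 0.80, AA = 0.80, and Kappa = 0.60; the tables report these quantities in percent (Kappa ×100), with the ± values giving standard deviations over repeated runs.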
Class | SVM | 3DCNN | LSTM | MSCNN | HybridSN | SSDF | SSFTT | IFEE |
---|---|---|---|---|---|---|---|---|
1 | 86.98 | 86.18 | 80.84 | 97.22 | 93.43 | 97.10 | 90.50 | 97.54 |
2 | 97.57 | 96.91 | 85.43 | 99.17 | 99.53 | 98.86 | 99.57 | 99.68 |
3 | 50.95 | 55.48 | 53.80 | 86.53 | 83.88 | 93.39 | 92.94 | 94.47 |
4 | 86.57 | 91.40 | 91.02 | 93.44 | 80.38 | 97.86 | 73.53 | 97.63 |
5 | 98.77 | 98.32 | 97.18 | 99.86 | 99.85 | 99.00 | 96.88 | 95.41 |
6 | 71.62 | 64.52 | 63.92 | 98.96 | 99.14 | 97.24 | 100.00 | 99.42 |
7 | 77.89 | 66.58 | 49.61 | 95.52 | 97.51 | 80.97 | 96.70 | 99.62 |
8 | 85.61 | 88.48 | 73.75 | 91.56 | 82.02 | 86.41 | 96.55 | 94.80 |
9 | 99.83 | 91.88 | 99.13 | 91.43 | 68.71 | 97.64 | 61.13 | 92.38 |
OA (%) | 88.25 ± 0.66 | 87.27 ± 0.98 | 80.51 ± 0.01 | 96.89 ± 0.69 | 94.16 ± 0.90 | 96.20 ± 0.19 | 94.74 ± 1.58 | 98.19 ± 0.22
AA (%) | 83.98 ± 1.80 | 82.19 ± 1.53 | 77.19 ± 0.01 | 94.86 ± 0.85 | 89.38 ± 1.57 | 94.27 ± 0.77 | 89.76 ± 1.59 | 96.77 ± 0.67
Kappa (×100) | 84.21 ± 0.92 | 82.88 ± 1.36 | 73.78 ± 0.01 | 95.88 ± 0.92 | 92.22 ± 1.20 | 94.96 ± 0.26 | 93.02 ± 2.06 | 97.59 ± 0.29
Class | SVM | 3DCNN | LSTM | MSCNN | HybridSN | SSDF | SSFTT | IFEE |
---|---|---|---|---|---|---|---|---|
1 | 98.58 | 85.54 | 98.68 | 99.98 | 99.76 | 99.47 | 99.95 | 100.00 |
2 | 98.19 | 99.49 | 98.96 | 100.00 | 99.99 | 99.90 | 100.00 | 99.50 |
3 | 94.05 | 94.50 | 85.03 | 95.91 | 99.60 | 97.59 | 99.78 | 99.18 |
4 | 96.54 | 96.29 | 97.61 | 97.45 | 99.42 | 99.31 | 99.36 | 98.81 |
5 | 96.55 | 97.20 | 95.93 | 97.63 | 98.94 | 99.05 | 98.36 | 97.68 |
6 | 99.23 | 98.35 | 99.85 | 99.66 | 99.54 | 99.66 | 99.99 | 99.98 |
7 | 99.51 | 97.05 | 93.57 | 99.60 | 99.93 | 98.33 | 99.93 | 100.00 |
8 | 86.30 | 83.41 | 75.72 | 94.32 | 95.54 | 95.97 | 98.71 | 99.12 |
9 | 99.39 | 98.10 | 97.93 | 99.77 | 99.99 | 99.48 | 99.99 | 99.69 |
10 | 87.58 | 87.22 | 83.72 | 95.22 | 98.76 | 98.89 | 98.88 | 99.63 |
11 | 89.81 | 90.55 | 77.57 | 94.67 | 98.22 | 86.76 | 98.75 | 97.52 |
12 | 98.31 | 96.68 | 95.99 | 98.78 | 99.79 | 99.65 | 99.95 | 96.75 |
13 | 97.95 | 95.46 | 90.24 | 93.30 | 90.80 | 96.51 | 96.87 | 99.19 |
14 | 90.57 | 93.68 | 97.63 | 87.22 | 97.26 | 94.83 | 98.47 | 99.62 |
15 | 62.23 | 74.29 | 71.26 | 88.91 | 90.70 | 94.22 | 95.24 | 97.39 |
16 | 91.67 | 83.81 | 94.09 | 98.96 | 99.32 | 98.79 | 99.34 | 99.99 |
OA (%) | 89.75 ± 0.53 | 89.73 ± 1.02 | 87.58 ± 0.01 | 96.05 ± 0.25 | 97.34 ± 0.17 | 97.47 ± 0.72 | 98.78 ± 0.36 | 99.08 ± 0.04
AA (%) | 92.90 ± 0.75 | 91.98 ± 1.09 | 90.86 ± 0.01 | 96.25 ± 0.63 | 97.97 ± 0.67 | 97.40 ± 0.68 | 98.97 ± 0.39 | 99.11 ± 0.20
Kappa (×100) | 88.56 ± 0.60 | 88.58 ± 1.13 | 86.16 ± 0.01 | 95.60 ± 0.28 | 97.04 ± 0.18 | 97.18 ± 0.81 | 98.64 ± 0.40 | 98.98 ± 0.05
Methods | IP | UP | SS |
---|---|---|---|
Spatial Branch | 94.65 | 97.51 | 98.77 |
Spectral Branch | 75.63 | 83.63 | 92.23 |
IFEE | 95.42 | 98.19 | 99.08 |
AGF | SIE | Bi + SAM | IP | UP | SS |
---|---|---|---|---|---|
× | √ | √ | 93.86 | 93.40 | 94.89 |
√ | × | √ | 95.05 | 97.77 | 98.96 |
√ | √ | × | 94.65 | 97.51 | 98.77 |
√ | √ | √ | 95.42 | 98.19 | 99.08 |
Methods | IP | UP | SS |
---|---|---|---|
IFEE × SAM | 95.35 | 98.14 | 98.82 |
IFEE × Bi | 95.21 | 97.97 | 98.68 |
IFEE × Bi + LSTM | 94.78 | 97.27 | 98.79 |
IFEE | 95.42 | 98.19 | 99.08 |
Methods | IP | UP | SS |
---|---|---|---|
SB-P | 75.63 | 83.63 | 92.23 |
SB-S | 56.35 | 73.76 | 65.10 |
IFEE-P | 95.42 | 98.19 | 99.08 |
IFEE-S | 94.94 | 97.82 | 98.83 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Liu, Y.; Jiang, S.; Liu, Y.; Mu, C. Spatial Feature Enhancement and Attention-Guided Bidirectional Sequential Spectral Feature Extraction for Hyperspectral Image Classification. Remote Sens. 2024, 16, 3124. https://doi.org/10.3390/rs16173124