AMHFN: Aggregation Multi-Hierarchical Feature Network for Hyperspectral Image Classification
Figure 1. Overall framework of the proposed AMHFN. The AMHFN comprises a stem layer that extracts shallow features and three stages that capture local and global spatial-spectral representations. The stem layer consists of two convolution operations that obtain shallow local features, and each stage includes LPEM, MSCE, and MSGE to extract subtle spatial-spectral information. MS is an abbreviation for multi-scale, MSCE for multi-scale convolution extraction, and MSGE for multi-scale global extraction.
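To make the pipeline in Figure 1 concrete, the following is a minimal PyTorch-style sketch of the stem-plus-three-stage layout described in the caption. The stage internals (LPEM, MSCE, MSGE), channel width, band count, and class count are illustrative assumptions rather than the authors' exact configuration; the band and class counts below match Houston 2013 only as an example.

```python
import torch
import torch.nn as nn

class AMHFNSketch(nn.Module):
    """Stem (two convolutions) + three stages (LPEM -> MSCE -> MSGE) + classification head."""
    def __init__(self, in_bands=144, width=64, num_classes=15,
                 stage_factory=lambda c: nn.Identity()):
        super().__init__()
        # Stem: two convolutions extracting shallow local features.
        self.stem = nn.Sequential(
            nn.Conv2d(in_bands, width, 3, padding=1), nn.BatchNorm2d(width), nn.GELU(),
            nn.Conv2d(width, width, 3, padding=1), nn.BatchNorm2d(width), nn.GELU(),
        )
        # Each stage would chain LPEM, MSCE, and MSGE; stage_factory is a placeholder here.
        self.stages = nn.Sequential(*[stage_factory(width) for _ in range(3)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(width, num_classes))

    def forward(self, x):  # x: (B, bands, H, W) patch centred on the pixel to classify
        return self.head(self.stages(self.stem(x)))

# Example: a 9x9 patch with 144 spectral bands (as in Houston 2013), 15 land-cover classes.
logits = AMHFNSketch()(torch.randn(2, 144, 9, 9))
print(logits.shape)  # torch.Size([2, 15])
```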
Figure 2. Structure of the ECA module, where H, W, and C represent the height, width, and number of channels of the feature map, respectively, and σ denotes the Sigmoid activation function.
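ECA, as introduced in ECA-Net (Wang et al.), gates channels by applying a 1-D convolution of kernel size k to the globally average-pooled channel descriptor and passing the result through a Sigmoid, which corresponds to the σ in Figure 2. A minimal sketch follows; the kernel size shown is only an example (its effect is studied in Section 4.3.2).

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: global average pooling -> 1-D conv across channels -> Sigmoid gate."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                          # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                     # global average pooling -> (B, C)
        y = self.conv(y.unsqueeze(1))              # 1-D conv over the channel axis -> (B, 1, C)
        w = self.sigmoid(y).squeeze(1)             # per-channel weights in (0, 1)
        return x * w.unsqueeze(-1).unsqueeze(-1)   # rescale each channel of the input

x = torch.randn(2, 64, 9, 9)
print(ECA(kernel_size=5)(x).shape)  # torch.Size([2, 64, 9, 9])
```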
Figure 3. Operation process of Partial Convolution (PConv); “×” denotes the convolution operation.
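PConv, following FasterNet (Chen et al.), convolves only a fraction of the input channels and passes the remaining channels through untouched, reducing redundant computation. Below is a minimal split-and-concatenate sketch; the convolved fraction (1/4) follows the FasterNet default and is an assumption here.

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial Convolution: convolve only the first C/n_div channels, pass the rest through."""
    def __init__(self, channels, n_div=4, kernel_size=3):
        super().__init__()
        self.c_conv = channels // n_div            # number of channels actually convolved
        self.conv = nn.Conv2d(self.c_conv, self.c_conv, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):                          # x: (B, C, H, W)
        x1, x2 = torch.split(x, [self.c_conv, x.size(1) - self.c_conv], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)

print(PConv(64)(torch.randn(2, 64, 9, 9)).shape)  # torch.Size([2, 64, 9, 9])
```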
Figure 4. Structure of the ESA mechanism.
Figure 5. Structure of the self-attention (SA) and multi-head self-attention (MHSA) mechanisms.
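Figure 5 depicts standard scaled dot-product self-attention and its multi-head form. A minimal sketch of MHSA over a token sequence is given below; the embedding dimension and head count are illustrative and not the settings used in AMHFN.

```python
import torch
import torch.nn as nn

class MHSA(nn.Module):
    """Standard multi-head self-attention over a token sequence."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.heads, self.scale = heads, (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                               # x: (B, N, dim) tokens
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.heads, D // self.heads).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]                # each: (B, heads, N, d_head)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, D)  # merge heads
        return self.proj(out)

tokens = torch.randn(2, 81, 64)      # e.g., a 9x9 patch flattened into 81 tokens
print(MHSA()(tokens).shape)          # torch.Size([2, 81, 64])
```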
Figure 6. Impact of different kernel sizes on AA for the three datasets.
Figure 7. AA of different models with different percentages of training samples on the three datasets.
Figure 8. Classification maps obtained by different methods on the WHU-Hi-LongKou dataset (with 2% training samples).
Figure 9. Classification maps obtained by different methods on the Houston 2013 dataset (with 10% training samples).
Figure 10. Classification maps obtained by different methods on the Pavia University dataset (with 1% training samples).
Figure 11. Visualization of t-SNE analysis on the Houston 2013 dataset.
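A t-SNE plot such as the one in Figure 11 is typically produced from features taken at the network's penultimate layer. The scikit-learn sketch below shows the general recipe; the feature source and all t-SNE hyperparameters here are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# features: (num_pixels, feature_dim) embeddings from the network's penultimate layer;
# labels: (num_pixels,) ground-truth class indices. Random data stands in for both here.
features = np.random.randn(1000, 64)
labels = np.random.randint(0, 15, size=1000)

emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab20", s=4)
plt.title("t-SNE of learned features (Houston 2013)")
plt.show()
```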
Abstract
1. Introduction
1. Most transformer-based methods explore global spatial dependencies while ignoring those in the spectral dimension. As a result, existing transformer-based HSI classification methods struggle to capture long-range spectral dependencies, which hinders further performance improvements.
2. Moreover, most transformer-based methods cannot further refine local features during training. This is mainly because the transformer processes local spatial features directly through multi-head self-attention, which limits the further exploitation of local information.
1. We propose a novel hybrid hyperspectral image classification method, the Aggregation Multi-Hierarchical Feature Network (AMHFN), which captures and aggregates local hierarchical features and explores the global dependencies of spectral information and prominent local spatial features.
2. We propose a Local-Pixel Embedding Module (LPEM) to exploit refined local contextual spatial-spectral features. Specifically, LPEM uses a grouped convolution layer to capture hierarchical spatial-spectral features (see the sketch after this list).
3. We further propose two modules to capture and aggregate multi-scale hierarchical features: a Multi-Scale Convolutional Extraction (MSCE) module that captures local spectral-spatial fusion information, and a Multi-Scale Global Extraction (MSGE) module that captures and integrates global dependencies.
4. Finally, experiments on three public HSI benchmarks show that the proposed AMHFN outperforms other state-of-the-art HSI classification methods.
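As referenced in contribution 2, the sketch below illustrates one plausible form of the LPEM grouped-convolution embedding. The paper only states that LPEM consists of one grouped convolution layer; the group count, normalization, and activation below are assumptions.

```python
import torch
import torch.nn as nn

class LPEMSketch(nn.Module):
    """Local-Pixel Embedding sketch: a single grouped convolution, so each group of
    spectral channels is embedded with its own local spatial filter."""
    def __init__(self, channels=64, groups=8, kernel_size=3):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2,
                      groups=groups, bias=False),   # grouped conv: local spatial-spectral mixing
            nn.BatchNorm2d(channels),
            nn.GELU(),
        )

    def forward(self, x):            # x: (B, C, H, W)
        return self.embed(x)

print(LPEMSketch()(torch.randn(2, 64, 9, 9)).shape)  # torch.Size([2, 64, 9, 9])
```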
2. Related Works
2.1. HSI Classification Methods Based on CNNs
2.2. HSI Classification Methods Based on Transformers
2.3. HSI Classification Methods Based on Combining CNN and Transformer
3. Proposed Methodology
Algorithm 1: AMHFN Implementation Process.
3.1. Multi-Scale Convolutional Layer
3.2. Multi-Scale Convolutional Extraction Module
3.2.1. ECA-Based Layer
3.2.2. ESA-Based Layer
3.3. Multi-Scale Global Extraction Module
4. Experiments
4.1. Datasets
4.1.1. WHU-Hi-LongKou Dataset
4.1.2. Pavia University Dataset
4.1.3. Houston 2013 Dataset
4.2. Experimental Setup
4.2.1. Evaluation Indicators
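Assuming the evaluation indicators are those reported in the result tables, namely overall accuracy (OA), average accuracy (AA), and Cohen's kappa coefficient (reported as κ×100), a minimal NumPy sketch of their computation from predicted and ground-truth labels is:

```python
import numpy as np

def oa_aa_kappa(y_true, y_pred, num_classes):
    """Overall accuracy (OA), average accuracy (AA), and Cohen's kappa from label vectors."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    oa = np.trace(cm) / n                                # fraction of correctly labelled pixels
    per_class = np.diag(cm) / cm.sum(axis=1)             # recall of each class
    aa = per_class.mean()                                # mean of the per-class accuracies
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2])
print(oa_aa_kappa(y_true, y_pred, 3))  # (0.833..., 0.833..., 0.75)
```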
Number of training and testing samples for the Houston 2013 dataset.

Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Healthy Grass | 125 | 1126 |
2 | Stressed Grass | 125 | 1129 |
3 | Synthetic Grass | 70 | 627 |
4 | Trees | 124 | 1120 |
5 | Soil | 124 | 1118 |
6 | Water | 33 | 292 |
7 | Residential | 127 | 1141 |
8 | Commercial | 124 | 1120 |
9 | Road | 125 | 1127 |
10 | Highway | 123 | 1104 |
11 | Railway | 123 | 1112 |
12 | Parking Lot 1 | 123 | 1110 |
13 | Parking Lot 2 | 47 | 422 |
14 | Tennis Court | 43 | 385 |
15 | Running Track | 66 | 594 |
Total | | 1502 | 13,527 |
4.2.2. Implementation Details
4.2.3. Comparison with State-of-the-Art Backbone Methods
4.3. Ablation Studies
4.3.1. Ablation Study of the Input Patch Size
4.3.2. Ablation Study of the Kernel Size in the ECA Block
4.3.3. Ablation Study of the Proposed Multi-Feature Hierarchical Module
4.3.4. Ablation Study of the Numbers of the Training Samples
4.4. Classification Results
Classification results (%) of different methods on the Houston 2013 dataset (10% training samples). 2D-CNN, 3D-CNN, and HybridSN are CNN-based methods; ViT, PiT, HiT, SSFTT, and GAHT are transformer-based methods.

Class No. | 2D-CNN | 3D-CNN | HybridSN | ViT | PiT | HiT | SSFTT | GAHT | AMHFN (Ours) |
---|---|---|---|---|---|---|---|---|---|
1 | 98.58 | 95.74 | 98.4 | 96.98 | 96.89 | 98.13 | 97.51 | 98.40 | 98.76 |
2 | 99.38 | 98.76 | 98.32 | 98.85 | 96.63 | 97.96 | 99.91 | 98.66 | 99.38 |
3 | 100 | 99.36 | 100 | 98.72 | 98.09 | 99.84 | 99.84 | 99.68 | 100 |
4 | 99.11 | 98.3 | 99.64 | 98.84 | 96.61 | 98.93 | 98.66 | 98.48 | 97.14 |
5 | 99.11 | 97.41 | 99.02 | 96.6 | 90.88 | 97.32 | 99.28 | 98.64 | 98.75 |
6 | 89.73 | 76.37 | 85.27 | 88.7 | 83.9 | 89.73 | 91.78 | 97.67 | 99.32 |
7 | 97.55 | 92.11 | 95 | 96.84 | 89.66 | 96.49 | 96.49 | 97.90 | 98.60 |
8 | 93.21 | 84.46 | 90.98 | 92.86 | 82.77 | 94.82 | 95.45 | 97.53 | 96.12 |
9 | 93.08 | 87.93 | 90.24 | 89.97 | 81.01 | 94.14 | 96.72 | 97.83 | 98.05 |
10 | 99.09 | 92.66 | 94.29 | 93.12 | 68.48 | 95.38 | 99.91 | 99.08 | 99.18 |
11 | 96.4 | 86.42 | 90.11 | 90.56 | 80.13 | 95.68 | 97.66 | 98.39 | 97.21 |
12 | 99.19 | 90.72 | 92.7 | 95.5 | 81.71 | 97.3 | 98.11 | 99.27 | 98.29 |
13 | 93.84 | 78.67 | 99.05 | 65.4 | 44.79 | 85.55 | 98.82 | 96.63 | 99.76 |
14 | 100 | 98.96 | 99.22 | 97.66 | 87.53 | 99.74 | 100 | 99.86 | 100 |
15 | 100 | 98.82 | 99.49 | 99.49 | 86.53 | 100 | 100 | 99.19 | 100 |
κ×100 (%) | 97.28 | 91.86 | 94.99 | 93.95 | 84.58 | 96.23 | 97.94 | 97.92 | 98.32 |
OA (%) | 97.49 | 92.47 | 95.36 | 94.40 | 85.73 | 96.51 | 98.09 | 98.07 | 98.45 |
AA (%) | 97.22 | 91.78 | 95.45 | 93.34 | 84.37 | 96.07 | 98.01 | 98.01 | 98.71 |
4.5. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Hestir, E.L.; Brando, V.E.; Bresciani, M.; Giardino, C.; Matta, E.; Villa, P.; Dekker, A.G. Measuring freshwater aquatic ecosystems: The need for a hyperspectral global mapping satellite mission. Remote Sens. Environ. 2015, 167, 181–195. [Google Scholar] [CrossRef]
- Sun, L.; Wu, F.; Zhan, T.; Liu, W.; Wang, J.; Jeon, B. Weighted Nonlocal Low-Rank Tensor Decomposition Method for Sparse Unmixing of Hyperspectral Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1174–1188. [Google Scholar] [CrossRef]
- Wang, J.; Zhang, L.; Tong, Q.; Sun, X. The Spectral Crust project—Research on new mineral exploration technology. In Proceedings of the 2012 4th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Shanghai, China, 4–7 June 2012; IEEE: Shanghai, China, 2012; pp. 1–4. [Google Scholar]
- Noor, S.S.M.; Michael, K.; Marshall, S.; Ren, J.; Tschannerl, J.; Kao, F. The properties of the cornea based on hyperspectral imaging: Optical biomedical engineering perspective. In Proceedings of the 2016 International Conference on Systems, Signals and Image Processing (IWSSIP), Bratislava, Slovakia, 23–25 May 2016; IEEE: Bratislava, Slovakia, 2016; pp. 1–4. [Google Scholar]
- Gevaert, C.M.; Suomalainen, J.; Tang, J.; Kooistra, L. Generation of Spectral–Temporal Response Surfaces by Combining Multispectral Satellite and Hyperspectral UAV Imagery for Precision Agriculture Applications. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3140–3146. [Google Scholar] [CrossRef]
- Ma, L.; Crawford, M.M.; Tian, J. Local Manifold Learning-Based k-Nearest-Neighbor for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4099–4109. [Google Scholar] [CrossRef]
- Song, W.; Li, S.; Kang, X.; Huang, K. Hyperspectral image classification based on KNN sparse representation. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; IEEE: Beijing, China, 2016; pp. 2411–2414. [Google Scholar]
- Ham, J.; Chen, Y.; Crawford, M.M.; Ghosh, J. Investigation of the random forest framework for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 492–501. [Google Scholar] [CrossRef]
- Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281. [Google Scholar] [CrossRef]
- Pasolli, E.; Melgani, F.; Tuia, D.; Pacifici, F.; Emery, W.J. SVM active learning approach for image classification using spatial information. IEEE Trans. Geosci. Remote Sens. 2013, 52, 2217–2233. [Google Scholar] [CrossRef]
- Li, S.; Jia, X.; Zhang, B. Superpixel-based Markov random field for classification of hyperspectral images. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium-IGARSS, Melbourne, Australia, 21–26 July 2013; IEEE: Melbourne, Australia, 2013; pp. 3491–3494. [Google Scholar]
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
- Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef]
- Ran, L.; Zhang, Y.; Wei, W.; Yang, T. Bands sensitive convolutional network for hyperspectral image classification. In Proceedings of the International Conference on Internet Multimedia Computing and Service, Xi’an, China, 19–21 August 2016; pp. 268–272. [Google Scholar]
- Mei, S.; Ji, J.; Hou, J.; Li, X.; Du, Q. Learning sensor-specific spatial-spectral features of hyperspectral images via convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4520–4533. [Google Scholar] [CrossRef]
- Yang, J.; Zhao, Y.Q.; Chan, J.C.W. Learning and transferring deep joint spectral–Spatial features for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4729–4742. [Google Scholar] [CrossRef]
- Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers & distillation through attention. In Proceedings of the International Conference on Machine Learning, Online, 18–24 July 2021; PMLR: San Diego, CA, USA, 2021; pp. 10347–10357. [Google Scholar]
- Jiang, Y.; Chang, S.; Wang, Z. Transgan: Two pure transformers can make one strong gan, and that can scale up. Adv. Neural Inf. Process. Syst. 2021, 34, 14745–14758. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. arXiv 2021, arXiv:2010.11929. [Google Scholar]
- Graham, B.; El-Nouby, A.; Touvron, H.; Stock, P.; Joulin, A.; Jégou, H.; Douze, M. Levit: A vision transformer in convnet’s clothing for faster inference. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Online, 11–17 October 2021; pp. 12259–12269. [Google Scholar]
- He, X.; Chen, Y.; Lin, Z. Spatial-Spectral Transformer for Hyperspectral Image Classification. Remote Sens. 2021, 13, 498. [Google Scholar] [CrossRef]
- Mei, S.; Song, C.; Ma, M.; Xu, F. Hyperspectral image classification using group-aware hierarchical transformer. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
- Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11534–11542. [Google Scholar]
- Slavkovikj, V.; Verstockt, S.; De Neve, W.; Van Hoecke, S.; Van de Walle, R. Hyperspectral image classification with convolutional neural networks. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015; pp. 1159–1162. [Google Scholar]
- Xu, X.; Li, W.; Ran, Q.; Du, Q.; Gao, L.; Zhang, B. Multisource remote sensing data classification based on convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2017, 56, 937–949. [Google Scholar] [CrossRef]
- Li, Y.; Zhang, H.; Shen, Q. Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef]
- Hang, R.; Liu, Q.; Hong, D.; Ghamisi, P. Cascaded recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5384–5394. [Google Scholar] [CrossRef]
- Mei, X.; Pan, E.; Ma, Y.; Dai, X.; Huang, J.; Fan, F.; Du, Q.; Zheng, H.; Ma, J. Spectral-spatial attention networks for hyperspectral image classification. Remote Sens. 2019, 11, 963. [Google Scholar] [CrossRef]
- Li, J.; Zhao, X.; Li, Y.; Du, Q.; Xi, B.; Hu, J. Classification of Hyperspectral Imagery Using a New Fully Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. 2018, 15, 292–296. [Google Scholar] [CrossRef]
- Maggiori, E.; Tarabalka, Y.; Charpiat, G.; Alliez, P. Fully convolutional neural networks for remote sensing image classification. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 5071–5074. [Google Scholar] [CrossRef]
- Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative adversarial networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063. [Google Scholar] [CrossRef]
- Zhan, Y.; Hu, D.; Wang, Y.; Yu, X. Semisupervised hyperspectral image classification based on generative adversarial networks. IEEE Geosci. Remote Sens. Lett. 2017, 15, 212–216. [Google Scholar] [CrossRef]
- Paoletti, M.E.; Haut, J.M.; Fernandez-Beltran, R.; Plaza, J.; Plaza, A.; Li, J.; Pla, F. Capsule networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2145–2160. [Google Scholar] [CrossRef]
- Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph convolutional networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 5966–5978. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
- Yang, X.; Cao, W.; Lu, Y.; Zhou, Y. Hyperspectral image transformer classification networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5528715. [Google Scholar] [CrossRef]
- Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking hyperspectral image classification with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15. [Google Scholar] [CrossRef]
- Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral–spatial feature tokenization transformer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522214. [Google Scholar] [CrossRef]
- Qi, W.; Huang, C.; Wang, Y.; Zhang, X.; Sun, W.; Zhang, L. Global–local 3-D convolutional transformer network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14. [Google Scholar] [CrossRef]
- Tu, B.; Liao, X.; Li, Q.; Peng, Y.; Plaza, A. Local Semantic Feature Aggregation-Based Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
- Ouyang, E.; Li, B.; Hu, W.; Zhang, G.; Zhao, L.; Wu, J. When Multigranularity Meets Spatial–Spectral Attention: A Hybrid Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–18. [Google Scholar] [CrossRef]
- Chen, J.; Kao, S.H.; He, H.; Zhuo, W.; Wen, S.; Lee, C.H.; Chan, S.H.G. Run, Don’t Walk: Chasing Higher FLOPS for Faster Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 12021–12031. [Google Scholar]
- Liu, B.; Yu, X.; Zhang, P.; Tan, X.; Yu, A.; Xue, Z. A semi-supervised convolutional neural network for hyperspectral image classification. Remote Sens. Lett. 2017, 8, 839–848. [Google Scholar] [CrossRef]
- Sharma, V.; Diba, A.; Tuytelaars, T.; Van Gool, L. Hyperspectral CNN for image classification & band selection, with application to face recognition. In Technical Report KUL/ESAT/PSI/1604; KU Leuven, ESAT: Leuven, Belgium, 2016. [Google Scholar]
- Heo, B.; Yun, S.; Han, D.; Chun, S.; Choe, J.; Oh, S.J. Rethinking spatial dimensions of vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 11936–11945. [Google Scholar]
Number of training and testing samples for the WHU-Hi-LongKou dataset.

Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Corn | 690 | 33,821 |
2 | Cotton | 167 | 8207 |
3 | Sesame | 61 | 2970 |
4 | Broad-leaf soybean | 1264 | 61,948 |
5 | Narrow-leaf soybean | 83 | 4068 |
6 | Rice | 237 | 11,617 |
7 | Water | 1341 | 65,715 |
8 | Roads and houses | 142 | 6982 |
9 | Mixed weed | 105 | 5124 |
Total | | 4090 | 200,452 |
Number of training and testing samples for the Pavia University dataset.

Class No. | Class Name | Training | Testing |
---|---|---|---|
1 | Asphalt | 332 | 6299 |
2 | Meadows | 932 | 17,717 |
3 | Gravel | 105 | 1994 |
4 | Trees | 153 | 2911 |
5 | Painted metal sheets | 67 | 1278 |
6 | Bare Soil | 251 | 4778 |
7 | Bitumen | 67 | 1263 |
8 | Self-Blocking Bricks | 184 | 3498 |
9 | Shadows | 47 | 900 |
Total | | 2138 | 40,638 |
Patch Size | 7 × 7 | 9 × 9 | 11 × 11 | 13 × 13 | 15 × 15 |
---|---|---|---|---|---|
WHU-Hi-LongKou | 96.28 | 95.46 | 94.45 | 93.90 | 91.86 |
Pavia University | 97.58 | 96.93 | 96.25 | 95.52 | 94.61 |
Houston 2013 | 97.77 | 98.40 | 98.56 | 97.87 | 97.44 |
No. | ECA | ESA | κ×100 (%) | OA (%) | AA (%) |
---|---|---|---|---|---|
1 | × | × | 97.94 | 98.09 | 98.01 |
2 | ✔ | × | 97.50 | 97.69 | 98.06 |
3 | × | ✔ | 97.98 | 98.13 | 98.29 |
4 | ✔ | ✔ | 98.32 | 98.45 | 98.71 |
Classification results (%) of different methods on the WHU-Hi-LongKou dataset (2% training samples). 2D-CNN, 3D-CNN, and HybridSN are CNN-based methods; ViT, PiT, HiT, SSFTT, and GAHT are transformer-based methods.

Class No. | 2D-CNN | 3D-CNN | HybridSN | ViT | PiT | HiT | SSFTT | GAHT | AMHFN (Ours) |
---|---|---|---|---|---|---|---|---|---|
1 | 91.38 | 94.29 | 92.85 | 89.72 | 89.36 | 89.83 | 95.71 | 95.9 | 95.85 |
2 | 93.12 | 93.54 | 93.44 | 63.64 | 87.43 | 90.17 | 94.20 | 94.82 | 95.31 |
3 | 92.63 | 86.03 | 82.46 | 75.12 | 48.18 | 88.01 | 90.34 | 94.44 | 96.16 |
4 | 93.06 | 94.57 | 93.54 | 90.14 | 89.62 | 90.61 | 96.06 | 95.64 | 96.25 |
5 | 94.91 | 90.68 | 89.65 | 78.96 | 76.77 | 91.62 | 91.27 | 95.75 | 94.47 |
6 | 96.81 | 97.39 | 97.05 | 96.36 | 95.45 | 96.12 | 98.29 | 97.87 | 98.09 |
7 | 97.92 | 98.64 | 98.26 | 97.55 | 97.55 | 97.57 | 99.01 | 99.03 | 99.03 |
8 | 94.86 | 93.93 | 95.46 | 94.11 | 91.79 | 93.34 | 97.41 | 96.89 | 97.15 |
9 | 92.6 | 92.53 | 93.58 | 89.72 | 90.59 | 92.33 | 92.54 | 91.98 | 92.76 |
κ×100 (%) | 93.10 | 94.40 | 93.50 | 88.93 | 89.20 | 91.20 | 95.82 | 95.86 | 96.17 |
OA (%) | 94.67 | 95.70 | 94.99 | 91.45 | 91.65 | 93.18 | 96.80 | 96.82 | 97.07 |
AA (%) | 94.14 | 93.51 | 92.92 | 86.15 | 85.19 | 92.18 | 94.98 | 95.81 | 96.12 |
Classification results (%) of different methods on the Pavia University dataset. 2D-CNN, 3D-CNN, and HybridSN are CNN-based methods; ViT, PiT, HiT, SSFTT, and GAHT are transformer-based methods.

Class No. | 2D-CNN | 3D-CNN | HybridSN | ViT | PiT | HiT | SSFTT | GAHT | AMHFN (Ours) |
---|---|---|---|---|---|---|---|---|---|
1 | 92.40 | 83.43 | 85.32 | 85.53 | 85.91 | 82.27 | 87.81 | 94.36 | 93.95 |
2 | 91.58 | 88.69 | 86.23 | 79.99 | 82.95 | 85.65 | 92.33 | 92.82 | 93.07 |
3 | 55.15 | 39.51 | 60.64 | 51.40 | 13.47 | 43.79 | 76.23 | 81.81 | 84.70 |
4 | 96.27 | 93.54 | 95.32 | 86.22 | 63.40 | 87.54 | 93.93 | 95.02 | 96.04 |
5 | 99.70 | 91.22 | 98.27 | 98.57 | 98.57 | 98.80 | 100.00 | 99.55 | 99.70 |
6 | 94.50 | 69.29 | 80.32 | 73.55 | 31.37 | 67.78 | 79.78 | 92.89 | 91.75 |
7 | 69.17 | 59.15 | 60.67 | 51.48 | 20.65 | 60.14 | 72.89 | 77.68 | 78.13 |
8 | 84.44 | 55.67 | 67.22 | 77.56 | 49.19 | 76.84 | 89.16 | 80.05 | 90.29 |
9 | 99.89 | 94.45 | 100.00 | 99.25 | 78.76 | 98.72 | 94.13 | 98.72 | 100.00 |
κ×100 (%) | 86.62 | 73.81 | 78.07 | 73.12 | 57.75 | 74.35 | 85.39 | 88.83 | 90.20 |
OA (%) | 89.73 | 79.97 | 83.04 | 79.05 | 68.09 | 80.26 | 88.88 | 91.46 | 92.51 |
AA (%) | 87.01 | 74.99 | 81.55 | 78.17 | 58.25 | 77.95 | 87.36 | 90.32 | 91.96 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Yang, X.; Luo, Y.; Zhang, Z.; Tang, D.; Zhou, Z.; Tang, H. AMHFN: Aggregation Multi-Hierarchical Feature Network for Hyperspectral Image Classification. Remote Sens. 2024, 16, 3412. https://doi.org/10.3390/rs16183412