Hyperspectral Image Classification Based on Adaptive Global–Local Feature Fusion
Figure 1. Framework of the AGLFF model.
Figure 2. Classification maps from the IP dataset. (a) False-color image. (b) Ground-truth map. (c) SVM. (d) ELM. (e) SuperPCA. (f) S3PCA. (g) BLS. (h) SBLS. (i) GCN. (j) GCGCN. (k) NSCKL. (l) XPGN. (m) AGLFF1. (n) AGLFF.
Figure 3. Classification maps from the KSC dataset. (a) False-color image. (b) Ground-truth map. (c) SVM. (d) ELM. (e) SuperPCA. (f) S3PCA. (g) BLS. (h) SBLS. (i) GCN. (j) GCGCN. (k) NSCKL. (l) XPGN. (m) AGLFF1. (n) AGLFF.
Figure 4. Classification maps from the PU dataset. (a) False-color image. (b) Ground-truth map. (c) SVM. (d) ELM. (e) SuperPCA. (f) S3PCA. (g) BLS. (h) SBLS. (i) GCN. (j) GCGCN. (k) NSCKL. (l) XPGN. (m) AGLFF1. (n) AGLFF.
Figure 5. Classification performance of different models with different label ratios. (a) IP. (b) KSC. (c) PU.
Figure 6. Classification performance of the fused, global, and local features. (a) IP. (b) KSC. (c) PU.
Figure 7. Classification performance of the AGLFF model with different parameters. (a) OA versus parameters λ and θ on IP. (b) OA versus parameters λ and θ on KSC. (c) OA versus parameters λ and θ on PU. (d) OA versus parameter ρ on three datasets. (e) OA versus parameter C on three datasets. (f) OA versus parameter M on three datasets.
Figure 8. OA versus G^E and G^M. (a) IP. (b) KSC. (c) PU.
Abstract
1. Introduction
- The global–local adaptive fusion graph is built to obtain consistent spatial–spectral features. Adaptive fusion automatically learns the weight parameters of the global high-order and local graphs, which realizes feature smoothing of intra-class data and increases the discriminability of inter-class data.
- The CP structure is used to express the relationship between the fused features and the categories so that unlabeled data are better utilized, resulting in improved classification performance.
- The adaptive fusion features are introduced into the BLS model as weights, and the WBLS model is used to broadly expand the fused features, further enhancing the expressiveness of the data.
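As a rough illustration of the first contribution, the sketch below fuses a local and a global (high-order) affinity graph with weights learned from each graph's smoothness penalty tr(XᵀLX), then smooths the features over the fused graph. The weighting rule, the smoothing step (I + λL)⁻¹X, and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def adaptive_fuse_graphs(W_local, W_global, X, lam=0.5, eps=1e-12):
    """Fuse a local and a global affinity graph with adaptively learned
    weights, then smooth features X over the fused graph (sketch)."""
    # smoothness penalty tr(X^T L X) of each candidate graph
    penalties = []
    for Wi in (W_local, W_global):
        L = np.diag(Wi.sum(1)) - Wi          # unnormalized graph Laplacian
        penalties.append(np.trace(X.T @ L @ X))
    # heuristic: the smoother graph (smaller penalty) gets the larger weight
    inv = 1.0 / (np.asarray(penalties) + eps)
    alpha = inv / inv.sum()                   # fusion weights, sum to 1
    W = alpha[0] * W_local + alpha[1] * W_global
    L = np.diag(W.sum(1)) - W
    # feature smoothing: X_s = (I + lam * L)^(-1) X
    X_s = np.linalg.solve(np.eye(len(W)) + lam * L, X)
    return X_s, alpha
```

Intra-class samples connected by strong edges are pulled toward each other by the smoothing step, which is the "feature smoothing of intra-class data" the contribution refers to.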
2. Adaptive Global–Local Feature Fusion Method
2.1. Adaptive Feature Fusion
Algorithm 1 Adaptive Feature Fusion Process
Input: PCA-based HSI representation , pixel spatial coordinate , superpixel spectral feature , and superpixel spatial coordinate .
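The inputs of Algorithm 1 (a PCA-reduced cube plus per-superpixel spectral and spatial statistics, e.g. from SLIC) can be assembled as in this minimal sketch; the function name, array layout, and use of mean statistics are illustrative assumptions.

```python
import numpy as np

def superpixel_features(hsi_pca, segments):
    """Mean spectral feature and mean spatial coordinate per superpixel.

    hsi_pca:  (H, W, d) PCA-reduced HSI cube
    segments: (H, W) integer superpixel labels (e.g. produced by SLIC)
    Returns (n_sp, d) superpixel spectra and (n_sp, 2) spatial coordinates.
    """
    H, W, d = hsi_pca.shape
    labels = np.unique(segments)
    spectra = np.zeros((len(labels), d))
    coords = np.zeros((len(labels), 2))
    rows, cols = np.indices((H, W))
    for i, s in enumerate(labels):
        mask = segments == s
        spectra[i] = hsi_pca[mask].mean(0)      # mean spectrum of the region
        coords[i] = rows[mask].mean(), cols[mask].mean()  # region centroid
    return spectra, coords
```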
2.2. Class Probability Structure
2.3. Weighted Broad Learning System
Algorithm 2 AGLFF Method
Input: Adaptive fused data . Output: Predictive labels .
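The WBLS step inside Algorithm 2 can be sketched as sample-weighted ridge regression over random mapped features and enhancement nodes — the broad expansion of BLS. The per-sample weights stand in for the adaptive fused features used as weights in the paper; the node counts, activation, and regularizer C are illustrative assumptions.

```python
import numpy as np

def wbls_train(X, Y, w, n_map=20, n_enh=40, C=1e-3, seed=0):
    """Weighted Broad Learning System sketch."""
    rng = np.random.default_rng(seed)
    Wz = rng.standard_normal((X.shape[1], n_map))
    Z = np.tanh(X @ Wz)                       # mapped features (MF)
    Wh = rng.standard_normal((n_map, n_enh))
    Hn = np.tanh(Z @ Wh)                      # enhancement nodes (EN)
    A = np.hstack([Z, Hn])                    # broad expansion
    D = np.diag(w)                            # per-sample weight matrix
    # weighted ridge regression: beta = (A^T D A + C I)^(-1) A^T D Y
    beta = np.linalg.solve(A.T @ D @ A + C * np.eye(A.shape[1]), A.T @ D @ Y)
    return Wz, Wh, beta

def wbls_predict(X, Wz, Wh, beta):
    Z = np.tanh(X @ Wz)
    A = np.hstack([Z, np.tanh(Z @ Wh)])
    return A @ beta                           # class scores; argmax = label
```

Because the output weights have this closed form, no backpropagation is needed, which is why BLS-style models train quickly compared with deep networks.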
3. Experiments and Analysis
3.1. HSI Datasets
3.2. Comparative Experiments
- The classification results of AGLFF outperform those of the other methods because the model achieves a consistent fusion of global and local features. In addition, semi-supervised classification exploits abundant unlabeled data, and the fused features are introduced into the BLS model as weights, which makes the features smoother and yields higher classification accuracy. GCGCN performs better than GCN because its efficient GCN design captures rich global spectral–spatial features. SBLS exploits more unlabeled information for semi-supervised classification through BLS and obtains relatively good results. The GCN model has the worst classification results because it processes only spectral information. Among the remaining methods, AGLFF1 achieves the best classification results after AGLFF and GCGCN thanks to the global–local fusion features, reaching higher accuracy in several classes across all three datasets. Hence, the proposed AGLFF outperforms the other methods by using the global–local fusion features and introducing them into the BLS model as weights, with an OA of 96.11% and a kappa of 95.23% on the IP dataset.
- The four models ELM, SVM, SuperPCA, and BLS consume the shortest time. Among them, BLS owes its speed mainly to its relatively simple structure, whose parameters can be computed via a matrix inverse. The deep methods GCN and GCGCN consume the longest time. XPGN takes longer than AGLFF because it uses three branch models to acquire information at various scales, which lengthens training. NSCKL takes less time than AGLFF due to its relatively simple structure. AGLFF takes neither the most nor the least time because fusing the global–local features and computing the class probability matrix between samples requires some time. However, AGLFF delivers the best classification performance and can realize feature smoothing of intra-class data while increasing the discriminability of inter-class data.
- The accuracy obtained by all methods on the IP dataset is relatively low because some classes differ only slightly; for example, corn-mintill and corn-notill are hard to distinguish and therefore difficult to classify. All methods yield better and less time-consuming results on the KSC dataset because it contains fewer classes and less inter-class similarity, making the categories easier to separate. AGLFF achieves the best results on the KSC dataset, with an OA of 99.26% and a kappa of 99.09%; misclassification appears only in Class 13 (water), and the remaining classes are classified correctly. This further illustrates the advantages of AGLFF.
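The OA, AA, and kappa values reported throughout these tables follow the standard definitions from a confusion matrix; a minimal sketch (function name illustrative):

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Overall accuracy (OA), average accuracy (AA), and Cohen's kappa."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                          # rows: truth, cols: prediction
    n = cm.sum()
    oa = np.trace(cm) / n                      # fraction correctly classified
    aa = np.mean(np.diag(cm) / cm.sum(1))      # mean of per-class recalls
    pe = (cm.sum(0) @ cm.sum(1)) / n**2        # expected chance agreement
    kappa = (oa - pe) / (1 - pe)               # chance-corrected agreement
    return oa, aa, kappa
```

Kappa discounts agreement expected by chance, which is why it is usually a few points below OA in the tables.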
3.3. Parameter Analysis
3.3.1. Semi-Supervised Label Ratio
3.3.2. Analysis of Construction Graph
3.3.3. Parameter Settings and Analysis
3.4. Ablation Studies
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
HSI | Hyperspectral Image |
AGLFF | Adaptive Global–Local Feature Fusion |
SVM | Support Vector Machine |
PCA | Principal Component Analysis |
DL | Deep Learning |
SAE | Stacked Autoencoder |
CNN | Convolutional Neural Network |
BLS | Broad Learning System |
MF | Mapped Features |
EN | Enhancement Nodes |
SSL | Semi-supervised Learning |
CP | Class Probability |
WBLS | Weighted Broad Learning System |
SLIC | Simple Linear Iterative Clustering |
IP | Indian Pines |
KSC | Kennedy Space Center |
PU | Pavia University |
AA | Average Accuracy |
OA | Overall Accuracy |
References
- Yu, C.; Zhou, S.; Song, M.; Chang, C. Semisupervised hyperspectral band selection based on dual-constrained low-rank representation. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5503005. [Google Scholar] [CrossRef]
- Cheng, Y.; Chen, Y.; Kong, Y.; Wang, X. Soft instance-level domain adaptation with virtual classifier for unsupervised hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5509013. [Google Scholar] [CrossRef]
- Prades, J.; Safont, G.; Salazar, A.; Vergara, L. Estimation of the Number of Endmembers in Hyperspectral Images Using Agglomerative Clustering. Remote Sens. 2020, 12, 3585. [Google Scholar] [CrossRef]
- Wang, H.; Cheng, Y.; Wang, X. A Novel Hyperspectral Image Classification Method Using Class-Weighted Domain Adaptation Network. Remote Sens. 2023, 15, 999. [Google Scholar] [CrossRef]
- Wang, H.; Wang, X.; Cheng, Y. Graph meta transfer network for heterogeneous few-shot hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 61, 5501112. [Google Scholar] [CrossRef]
- Kong, Y.; Wang, X.; Cheng, Y.; Chen, C.L.P. Multi-stage convolutional broad learning with block diagonal constraint for hyperspectral image classification. Remote Sens. 2021, 13, 3412. [Google Scholar] [CrossRef]
- Ham, J.; Chen, Y.; Crawford, M.; Ghosh, J. Investigation of the random forest framework for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 492–501. [Google Scholar] [CrossRef]
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
- Salazar, A.; Safont, G.; Vergara, L.; Vidal, E. Graph Regularization Methods in Soft Detector Fusion. IEEE Access 2023, 11, 144747–144759. [Google Scholar] [CrossRef]
- Sun, W.; Du, Q. Graph-regularized fast and robust principal component analysis for hyperspectral band selection. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3185–3195. [Google Scholar] [CrossRef]
- Villa, A.; Benediktsson, J.; Chanussot, J.; Jutten, C. Hyperspectral image classification with independent component discriminant analysis. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4865–4876. [Google Scholar] [CrossRef]
- Shi, C.; Sun, J.; Wang, T.; Wang, L. Hyperspectral Image Classification Based on a 3D Octave Convolution and 3D Multiscale Spatial Attention Network. Remote Sens. 2023, 15, 257. [Google Scholar] [CrossRef]
- Liu, W.; Liu, B.; He, P.; Hu, Q.; Gao, K.; Li, H. Masked Graph Convolutional Network for Small Sample Classification of Hyperspectral Images. Remote Sens. 2023, 15, 1869. [Google Scholar] [CrossRef]
- Pan, C.; Gao, X.; Wang, Y.; Li, J. Markov random fields integrating adaptive interclass-pair penalty and spectral similarity for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2520–2534. [Google Scholar] [CrossRef]
- Ghamisi, P.; Benediktsson, J.; Ulfarsson, M. Spectral–spatial classification of hyperspectral images based on hidden markov random fields. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2565–2574. [Google Scholar] [CrossRef]
- Lu, T.; Li, S.; Fang, L.; Jia, X.; Benediktsson, J. From subpixel to superpixel: A novel fusion framework for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4398–4411. [Google Scholar] [CrossRef]
- Cai, Y.; Zhang, Z.; Ghamisi, P.; Ding, Y.; Liu, X.; Cai, Z.; Gloaguen, R. Superpixel contracted neighborhood contrastive subspace clustering network for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5530113. [Google Scholar] [CrossRef]
- Wang, K.; Wang, X.; Zhang, T.; Cheng, Y. Few-shot learning with deep balanced network and acceleration strategy. Int. J. Mach. Learn Cybern. 2022, 13, 133–144. [Google Scholar] [CrossRef]
- Chen, Y.; Zhao, X.; Jia, X. Spectral–spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2015, 8, 2381–2392. [Google Scholar] [CrossRef]
- Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
- Cai, Y.; Liu, X.; Cai, Z. BS-Nets: An end-to-end framework for band selection of hyperspectral image. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1969–1984. [Google Scholar] [CrossRef]
- Mei, S.; Ji, J.; Hou, J.; Li, X.; Du, Q. Learning sensor-specific spatial-spectral features of hyperspectral images via convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4520–4533. [Google Scholar] [CrossRef]
- Kong, Y.; Wang, X.; Cheng, Y. Spectral–spatial feature extraction for HSI classification based on supervised hypergraph and sample expanded CNN. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 4128–4140. [Google Scholar] [CrossRef]
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
- Liu, B.; Yu, X.; Zhang, P.; Yu, A.; Fu, Q.; Wei, X. Supervised deep feature extraction for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1909–1921. [Google Scholar] [CrossRef]
- Yang, X.; Ye, Y.; Li, X.; Lau, R.Y.; Zhang, X.; Huang, X. Hyperspectral image classification with deep learning models. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5408–5423. [Google Scholar] [CrossRef]
- Mou, L.; Ghamisi, P.; Zhu, X. Unsupervised spectral–spatial feature learning via deep residual conv–deconv network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 391–406. [Google Scholar] [CrossRef]
- Kong, Y.; Wang, X.; Cheng, Y.; Chen, Y.; Chen, C.L.P. Graph domain adversarial network with dual-weighted pseudo-label loss for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6005105. [Google Scholar] [CrossRef]
- Ding, Y.; Pan, S.; Chong, Y. Robust spatial-spectral block-diagonal structure representation with fuzzy class probability for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1747–1762. [Google Scholar] [CrossRef]
- Chen, C.L.P.; Liu, Z. Broad learning system: An effective and efficient incremental learning system without the need for deep architecture. IEEE Trans. Neural Netw. Learn Syst. 2018, 29, 10–24. [Google Scholar] [CrossRef]
- Jin, J.; Chen, C.L.P. Regularized robust broad learning system for uncertain data modeling. Neurocomputing 2018, 322, 58–69. [Google Scholar] [CrossRef]
- Kong, Y.; Cheng, Y.; Chen, C.L.P.; Wang, X. Hyperspectral image clustering based on unsupervised broad learning. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1741–1745. [Google Scholar] [CrossRef]
- Wang, H.; Wang, X.; Chen, C.L.P.; Cheng, Y. Hyperspectral image classification based on domain adaptation broad learning. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2020, 13, 3006–3018. [Google Scholar] [CrossRef]
- Camps-Valls, G.; Marsheva, T.V.B.; Zhou, D. Semi-supervised graph-based hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3044–3054. [Google Scholar] [CrossRef]
- Zhang, Y.; Cao, G.; Shafique, A.; Fu, P. Label propagation ensemble for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 3623–3636. [Google Scholar] [CrossRef]
- De Morsier, F.; Borgeaud, M.; Gass, V.; Thiran, J.-P.; Tuia, D. Kernel low-rank and sparse graph for unsupervised and semi-supervised classification of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3410–3420. [Google Scholar] [CrossRef]
- Ma, J.; Chow, T.W. Robust non-negative sparse graph for semi-supervised multi-label learning with missing labels. Inf. Sci. 2018, 422, 336–351. [Google Scholar] [CrossRef]
- Shao, Y.; Sang, N.; Gao, C.; Ma, L. Spatial and class structure regularized sparse representation graph for semi-supervised hyperspectral image classification. Pattern Recognit. 2018, 81, 81–94. [Google Scholar] [CrossRef]
- Ding, Y.; Guo, Y.; Chong, Y.; Pan, S.; Feng, J. Global consistent graph convolutional network for hyperspectral image classification. IEEE Trans. Instrum. Meas. 2021, 70, 5501516. [Google Scholar] [CrossRef]
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Machine Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef]
- Nie, F.; Wang, X.; Huang, H. Clustering and projected clustering with adaptive neighbors. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), New York, NY, USA, 24–27 August 2014; pp. 977–986. [Google Scholar]
- Lin, Z.; Liu, R.; Su, Z. Linearized alternating direction method with adaptive penalty for low-rank representation. In Proceedings of the Advances in Neural Information Processing Systems, Granada, Spain, 20 September 2011; pp. 612–620. [Google Scholar]
- Wu, Y.; Yang, X.; Plaza, A.; Qiao, F.; Gao, L.; Zhang, B.; Cui, Y. Approximate computing of remotely sensed data: SVM hyperspectral image classification as a case study. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2016, 9, 5806–5818. [Google Scholar] [CrossRef]
- Zhai, H.; Zhang, H.; Zhang, L.; Li, P.; Plaza, A. A new sparse subspace clustering algorithm for hyperspectral remote sensing imagery. IEEE Geosci. Remote Sens. Lett. 2017, 14, 43–47. [Google Scholar] [CrossRef]
- Jiang, J.; Ma, J.; Chen, C.; Wang, Z.; Cai, Z.; Wang, L. SuperPCA: A superpixelwise PCA approach for unsupervised feature extraction of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4581–4593. [Google Scholar] [CrossRef]
- Zhang, X.; Jiang, X.; Jiang, J.; Zhang, Y.; Liu, X.; Cai, Z. Spectral—Spatial and superpixelwise PCA for unsupervised feature extraction of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5502210. [Google Scholar] [CrossRef]
- Kong, Y.; Wang, X.; Cheng, Y.; Chen, C.L.P. Hyperspectral imagery classification based on semi-supervised broad learning system. Remote Sens. 2018, 10, 685. [Google Scholar] [CrossRef]
- Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar] [CrossRef]
- Su, Y.; Gao, L.; Jiang, M.; Plaza, A.; Sun, X.; Zhang, B. NSCKL: Normalized Spectral Clustering With Kernel-Based Learning for Semisupervised Hyperspectral Image Classification. IEEE Trans. Cybern. 2023, 53, 6649–6662. [Google Scholar] [CrossRef]
- Xi, B.; Li, J.; Li, Y.; Song, R.; Xiao, Y.; Du, Q.; Chanussot, J. Semi-supervised Cross-scale Graph Prototypical Network for Hyperspectral Image Classification. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 9337–9351. [Google Scholar] [CrossRef]
Class | Surface Object (IP) | n.l.s | n.u.s | Surface Object (KSC) | n.l.s | n.u.s | Surface Object (PU) | n.l.s | n.u.s
---|---|---|---|---|---|---|---|---|---
1 | Alfalfa | 30 | 16 | Scrub | 30 | 731 | Asphalt | 30 | 6601 |
2 | Corn-notill | 30 | 1398 | Willow swamp | 30 | 213 | Meadows | 30 | 18,619 |
3 | Corn-mintill | 30 | 800 | Cabbage palm hammock | 30 | 226 | Gravel | 30 | 2069 |
4 | Corn | 30 | 207 | Slash pine | 30 | 222 | Trees | 30 | 3034 |
5 | Grass-pasture | 30 | 453 | Oak/broadleaf | 30 | 131 | Painted metal sheets | 30 | 1345 |
6 | Grass-trees | 30 | 700 | Hardwood | 30 | 199 | Bare soil | 30 | 4999 |
7 | Grass-pasture-mowed | 15 | 13 | Swamp | 30 | 75 | Bitumen | 30 | 1300 |
8 | Hay-windrowed | 30 | 448 | Graminoid marsh | 30 | 401 | Self-blocking bricks | 30 | 3652 |
9 | Oats | 15 | 5 | Spartina marsh | 30 | 490 | Shadows | 30 | 917 |
10 | Soybean-notill | 30 | 942 | Cattail marsh | 30 | 374 | |||
11 | Soybean-mintill | 30 | 2425 | Salt marsh | 30 | 389 | |||
12 | Soybean-clean | 30 | 563 | Mud flats | 30 | 473 | |||
13 | Wheat | 30 | 175 | Water | 30 | 897 | |||
14 | Woods | 30 | 1235 | ||||||
15 | Buildings-grass-trees-drives | 30 | 356 | ||||||
16 | Stone-steel-towers | 30 | 63 |
n.l.s/n.u.s: number of labeled/unlabeled samples per class.
Class | SVM | ELM | SuperPCA | S3PCA | BLS | SBLS | GCN | GCGCN | NSCKL | XPGN | AGLFF1 | AGLFF |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 20.48 | 35.98 | 100 | 100 | 100 | 100 | 95.00 | 100 | 90.39 | 98.13 | 98.96 | 96.88 |
2 | 55.49 | 63.72 | 92.65 | 91.82 | 64.89 | 92.17 | 56.71 | 91.15 | 94.69 | 88.80 | 90.53 | 90.79 |
3 | 42.77 | 43.67 | 96.28 | 93.52 | 55.76 | 94.88 | 51.50 | 92.53 | 89.38 | 96.11 | 96.08 | 96.21 |
4 | 36.5 | 36.01 | 88.41 | 96.20 | 51.82 | 99.90 | 84.64 | 99.95 | 87.05 | 99.47 | 99.92 | 99.89 |
5 | 78.45 | 82.96 | 95.14 | 96.03 | 86.36 | 94.08 | 83.71 | 97.04 | 93.71 | 97.09 | 96.91 | 99.40 |
6 | 91.33 | 94.22 | 97.14 | 97.14 | 88.87 | 99.80 | 94.03 | 96.79 | 98.19 | 99.57 | 99.36 | 99.74 |
7 | 39.80 | 36.61 | 92.86 | 92.86 | 83.16 | 98.57 | 92.31 | 96.15 | 99.87 | 99.16 | 98.81 | 99.92 |
8 | 98.93 | 99.29 | 99.55 | 100 | 90.87 | 99.82 | 96.61 | 100 | 100 | 100 | 100 | 100 |
9 | 30.76 | 29.28 | 100 | 100 | 100 | 100 | 100 | 100 | 73.68 | 99.71 | 63.33 | 100 |
10 | 47.49 | 57.41 | 89.52 | 90.70 | 63.64 | 86.88 | 77.47 | 91.51 | 95.58 | 95.15 | 95.56 | 91.43 |
11 | 71.92 | 74.56 | 93.73 | 95.31 | 81.70 | 89.69 | 56.56 | 95.38 | 93.24 | 88.67 | 90.20 | 94.68 |
12 | 53.36 | 52.18 | 96.67 | 97.34 | 69.82 | 97.80 | 58.29 | 95.81 | 95.92 | 96.10 | 97.13 | 98.94 |
13 | 89.38 | 92.81 | 99.43 | 99.43 | 90.59 | 99.43 | 100 | 99.31 | 98.02 | 99.39 | 99.62 | 97.93 |
14 | 93.40 | 93.97 | 90.20 | 91.55 | 90.87 | 97.23 | 80.03 | 99.26 | 99.91 | 99.44 | 99.77 | 99.10 |
15 | 50.86 | 57.67 | 98.53 | 98.60 | 80.16 | 99.44 | 69.55 | 97.50 | 90.65 | 98.48 | 99.91 | 93.35 |
16 | 85.47 | 89.58 | 97.82 | 98.41 | 91.88 | 99.05 | 98.41 | 99.52 | 92.30 | 98.17 | 98.94 | 99.38 |
OA (%) | 64.35 | 68.37 | 94.61 | 95.79 | 81.36 | 93.95 | 69.24 | 95.35 | 94.74 | 94.83 | 95.03 | 96.11 |
AA (%) | 61.65 | 64.99 | 95.49 | 96.18 | 80.02 | 96.80 | 65.27 | 96.80 | 93.29 | 97.09 | 95.31 | 97.35 |
Kappa (%) | 59.92 | 64.40 | 92.99 | 93.67 | 80.32 | 92.29 | 80.39 | 94.67 | 93.99 | 94.10 | 94.77 | 95.23 |
T (s) | 3.76 | 1.51 | 1.95 | 3.98 | 3.17 | 528.75 | 580.00 | 641.00 | 47.97 | 399.54 | 367.83 | 392.57 |
Class | SVM | ELM | SuperPCA | S3PCA | BLS | SBLS | GCN | GCGCN | NSCKL | XPGN | AGLFF1 | AGLFF |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 93.72 | 83.35 | 96.72 | 96.72 | 88.60 | 98.80 | 80.15 | 97.84 | 98.98 | 98.87 | 97.78 | 99.40 |
2 | 76.49 | 77.84 | 98.71 | 99.53 | 76.74 | 96.06 | 80.02 | 95.85 | 95.59 | 98.12 | 96.62 | 99.48 |
3 | 73.15 | 61.93 | 97.23 | 98.23 | 70.36 | 96.19 | 76.66 | 97.39 | 99.91 | 100 | 99.85 | 100 |
4 | 45.16 | 68.55 | 92.17 | 97.75 | 75.61 | 80.23 | 27.63 | 99.64 | 87.08 | 84.98 | 99.73 | 99.24 |
5 | 60.70 | 71.25 | 96.18 | 97.71 | 69.06 | 99.08 | 72.56 | 98.93 | 88.26 | 98.82 | 99.24 | 98.31 |
6 | 48.55 | 82.48 | 98.05 | 97.49 | 74.12 | 75.34 | 80.51 | 100 | 99.17 | 98.06 | 100 | 100 |
7 | 68.69 | 61.33 | 100 | 100 | 72.22 | 79.33 | 95.60 | 100 | 100 | 99.89 | 99.91 | 99.28 |
8 | 64.89 | 74.69 | 97.72 | 100 | 76.51 | 98.35 | 85.02 | 100 | 99.85 | 99.76 | 100 | 100 |
9 | 80.65 | 92.59 | 96.73 | 95.51 | 91.87 | 96.29 | 84.59 | 100 | 100 | 100 | 99.82 | 99.45 |
10 | 98.8 | 98.19 | 92.51 | 89.30 | 100 | 97.33 | 91.64 | 99.48 | 99.09 | 98.11 | 100 | 100 |
11 | 91.85 | 96.99 | 99.49 | 99.49 | 99.15 | 96.50 | 89.18 | 100 | 99.71 | 91.28 | 99.82 | 100 |
12 | 79.96 | 89.06 | 99.10 | 100 | 92.38 | 92.30 | 76.20 | 96.89 | 96.06 | 97.65 | 97.72 | 99.65 |
13 | 99.50 | 98.44 | 96.07 | 96.66 | 100 | 99.02 | 99.49 | 100 | 100 | 99.91 | 94.74 | 94.81 |
OA (%) | 82.19 | 85.68 | 97.09 | 97.82 | 88.38 | 93.62 | 83.72 | 99.18 | 98.29 | 96.97 | 98.28 | 99.26 |
AA (%) | 75.55 | 81.28 | 96.97 | 97.57 | 83.59 | 92.24 | 79.94 | 99.09 | 97.21 | 97.34 | 98.86 | 99.53 |
Kappa (%) | 79.68 | 84.11 | 96.43 | 96.85 | 87.01 | 92.68 | 81.89 | 98.99 | 98.09 | 96.16 | 98.08 | 99.09 |
T (s) | 3.35 | 1.39 | 1.51 | 14.55 | 2.03 | 243.17 | 289.28 | 356.25 | 234.17 | 254.39 | 173.13 | 186.29 |
Class | SVM | ELM | SuperPCA | S3PCA | BLS | SBLS | GCN | GCGCN | NSCKL | XPGN | AGLFF1 | AGLFF |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 91.71 | 95.36 | 81.03 | 94.97 | 97.82 | 86.53 | 69.78 | 94.59 | 97.11 | 95.89 | 91.76 | 96.94 |
2 | 91.15 | 94.16 | 86.27 | 90.02 | 97.29 | 97.53 | 54.10 | 98.11 | 99.90 | 99.82 | 98.91 | 99.11 |
3 | 60.59 | 59.05 | 94.10 | 99.14 | 60.98 | 98.44 | 69.69 | 99.35 | 87.69 | 89.13 | 99.12 | 98.55 |
4 | 74.12 | 75.01 | 78.83 | 95.00 | 87.24 | 87.32 | 91.23 | 96.11 | 94.35 | 97.37 | 84.1 | 94.52 |
5 | 95.67 | 99.12 | 97.11 | 99.27 | 100 | 99.87 | 98.74 | 99.77 | 100 | 100 | 99.84 | 99.82 |
6 | 60.51 | 60.56 | 94.62 | 98.67 | 71.02 | 99.37 | 65.34 | 99.69 | 98.66 | 99.16 | 99.29 | 99.56 |
7 | 54.89 | 54.57 | 96.79 | 98.53 | 57.91 | 99.97 | 86.64 | 99.53 | 98.61 | 99.19 | 99.87 | 99.89 |
8 | 80.07 | 72.14 | 92.89 | 95.68 | 80.42 | 94.12 | 72.26 | 97.84 | 96.86 | 98.71 | 98.93 | 97.94 |
9 | 99.91 | 99.98 | 98.32 | 99.13 | 100 | 90.59 | 99.93 | 96.80 | 90.29 | 99.29 | 94.59 | 96.61 |
OA (%) | 80.04 | 80.98 | 91.00 | 96.24 | 87.19 | 95.09 | 66.19 | 97.71 | 97.15 | 97.09 | 96.76 | 98.01 |
AA (%) | 78.74 | 78.88 | 91.11 | 96.71 | 83.63 | 94.86 | 58.39 | 97.98 | 95.94 | 97.62 | 96.27 | 98.11 |
Kappa (%) | 75.19 | 75.62 | 84.14 | 91.98 | 83.19 | 93.47 | 78.63 | 96.98 | 96.21 | 96.53 | 95.69 | 97.35 |
T (s) | 3.36 | 1.99 | 3.84 | 123.64 | 5.76 | 1121.99 | 1783.00 | 1653.00 | 756.82 | 1057.19 | 897.01 | 932.35 |
Algorithm | AGLFF-A | AGLFF-B | AGLFF-C | AGLFF
---|---|---|---|---
LFs | ✓ | | |
GFs | | ✓ | |
FFs | | | ✓ | ✓
WBLS | ✓ | ✓ | | ✓
Dataset | AGLFF-A | AGLFF-B | AGLFF-C | AGLFF |
---|---|---|---|---|
IP | 92.87 | 94.76 | 95.03 | 96.11 |
KSC | 96.05 | 98.09 | 98.28 | 99.26 |
PU | 94.07 | 96.19 | 96.76 | 98.01 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Yang, C.; Kong, Y.; Wang, X.; Cheng, Y. Hyperspectral Image Classification Based on Adaptive Global–Local Feature Fusion. Remote Sens. 2024, 16, 1918. https://doi.org/10.3390/rs16111918