Hierarchical Prototype-Aligned Graph Neural Network for Cross-Scene Hyperspectral Image Classification
"> Figure 1
<p>Illustration of joint distribution alignment and topological structure alignment for cross-scene learning.</p> "> Figure 2
<p>Calculation flowchart of the GOT distance.</p> "> Figure 3
<p>Flowchart of the proposed HPGA.</p> "> Figure 4
<p>Pseudocolor image and ground truth map of Houston. (<b>a</b>) Pseudocolor image of Houston 2013. (<b>b</b>) Ground truth map of Houston 2013. (<b>c</b>) Pseudocolor image of Houston 2018. (<b>d</b>) Ground truth map of Houston 2018.</p> "> Figure 5
<p>Pseudocolor image and ground truth map of HyRANK. (<b>a</b>) Pseudocolor image of Dioni. (<b>b</b>) Ground truth map of Dioni. (<b>c</b>) Pseudocolor image of Loukia. (<b>d</b>) Ground truth map of Loukia.</p> "> Figure 6
<p>Classification result maps for the target scene Houston2018. (<b>a</b>) False color image. (<b>b</b>) Ground truth. (<b>c</b>) DAN. (<b>d</b>) DAAN. (<b>e</b>) MRAN. (<b>f</b>) DSAN. (<b>g</b>) HTCNN. (<b>h</b>) BCAN. (<b>i</b>) HPGA.</p> "> Figure 7
<p>Classification result maps for the target scene HyRANK. (<b>a</b>) False color image. (<b>b</b>) Ground truth. (<b>c</b>) DAN. (<b>d</b>) DAAN. (<b>e</b>) MRAN. (<b>f</b>) DSAN. (<b>g</b>) HTCNN. (<b>h</b>) BCAN. (<b>i</b>) HPGA.</p> "> Figure 8
<p>Comparison of 2D visualization of domain adaptive pre- and post-features on the Houston dataset.</p> "> Figure 9
<p>The impact of parameters <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>λ</mi> <mn>2</mn> </msub> </semantics></math> on classification results.</p> ">
Abstract
1. Introduction
- A generic, end-to-end differentiable framework is proposed that performs domain alignment on prototype graphs derived from both the source and target domains in a hierarchical manner. By aligning topological and semantic information at multiple scales, the accuracy and generalization capability of hyperspectral image classification are improved.
- Different scales of prototype graph structure data are obtained using differentiable graph pooling. This allows the model to analyze data at multiple levels, capturing richer semantic information at different hierarchies.
- The problem of the cross-domain alignment of graph structures is transformed into a graph-matching problem. During network optimization, the graph optimal transport (GOT) distance is minimized in order to align the source and target domains. This approach leverages graph structure information to better address the spectral shift problem.
- Experimental results demonstrate that the proposed hyperspectral cross-scene classification method based on hierarchical prototype graph alignment achieves excellent performance on several datasets. This indicates that the approach has strong generalization capabilities when dealing with hyperspectral cross-scene classification tasks.
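To make the second contribution concrete, a DiffPool-style differentiable pooling step can coarsen a prototype graph into fewer, higher-level prototypes via a learned soft assignment. The NumPy sketch below is our illustration, not the authors' exact layer: `W_embed` and `W_assign` stand in for learned weights, and a single propagation step replaces a full GNN.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def diff_pool(X, A, W_embed, W_assign):
    """One DiffPool-style coarsening step (illustrative sketch).
    X: n x d node features, A: n x n adjacency."""
    Z = np.tanh(A @ X @ W_embed)            # n x d_out node embeddings
    S = softmax(A @ X @ W_assign, axis=1)   # n x k soft cluster assignments
    X_coarse = S.T @ Z                       # k x d_out pooled prototype features
    A_coarse = S.T @ A @ S                   # k x k coarsened adjacency
    return X_coarse, A_coarse

# Toy graph: 6 nodes with 4-dim features, pooled to 2 prototypes
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
A = (rng.random((6, 6)) > 0.5).astype(float)
A = np.maximum(A, A.T)                       # symmetrize adjacency
Xc, Ac = diff_pool(X, A, rng.normal(size=(4, 3)), rng.normal(size=(4, 2)))
```

Because everything is matrix algebra (no discrete cluster selection), gradients flow through `S`, which is what lets the hierarchy of prototype graphs be trained end to end.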
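The GOT distance used in the third contribution combines a Wasserstein cost on node (prototype) features with a Gromov-Wasserstein cost on intra-graph structure. The sketch below is ours, not the paper's implementation: `lam` and `eps` are illustrative hyperparameters, the cosine cost is one possible choice, and for brevity the structural term reuses the feature-based transport plan instead of alternating plan updates as a full GW solver would.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.05, n_iter=200):
    """Entropic-regularized optimal transport plan via Sinkhorn iterations."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def got_distance(Xs, Xt, As, At, lam=0.5, eps=0.05):
    """GOT-style distance: lam * Wasserstein(node features)
    + (1 - lam) * Gromov-Wasserstein(intra-graph structure)."""
    ns, nt = Xs.shape[0], Xt.shape[0]
    a = np.full(ns, 1.0 / ns)   # uniform node weights, source graph
    b = np.full(nt, 1.0 / nt)   # uniform node weights, target graph
    # Wasserstein term: cosine cost between cross-domain node features
    Xs_n = Xs / np.linalg.norm(Xs, axis=1, keepdims=True)
    Xt_n = Xt / np.linalg.norm(Xt, axis=1, keepdims=True)
    C = 1.0 - Xs_n @ Xt_n.T
    T = sinkhorn(C, a, b, eps)
    wd = np.sum(T * C)
    # Gromov-Wasserstein term (squared loss): compares intra-graph edge
    # costs As and At through the same plan T
    const = np.outer((As ** 2) @ a, np.ones(nt)) \
          + np.outer(np.ones(ns), (At ** 2) @ b)
    L = const - 2.0 * As @ T @ At.T
    gwd = np.sum(T * L)
    return lam * wd + (1.0 - lam) * gwd
```

Since every step is differentiable, minimizing this quantity as a loss pulls the source and target prototype graphs toward agreement in both features and topology.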
2. Methodology
2.1. Preliminaries
2.1.1. Domain Adaptation
2.1.2. Graph Matching
2.2. Overall Framework
2.3. Hierarchical Prototype Representation
2.4. Domain Alignment
2.5. Loss Function
3. Experiments
3.1. Dataset Description
3.1.1. Houston
3.1.2. HyRANK
3.2. Experimental Settings
3.3. Experimental Results
4. Discussion
4.1. Ablation Experiments
4.2. Parameter Sensitivity Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Li, K.; Zhang, G.; Li, X.; Xie, J. Face recognition based on improved Retinex and sparse representation. Procedia Eng. 2011, 15, 2010–2014. [Google Scholar] [CrossRef]
- Zhang, L.; Yang, M.; Feng, X. Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 471–478. [Google Scholar]
- Li, W.; Zhang, Y.; Liu, N.; Du, Q.; Tao, R. Structure-aware collaborative representation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7246–7261. [Google Scholar] [CrossRef]
- Wang, R.; Chen, H.; Lu, Y.; Zhang, Q.; Nie, F.; Li, X. Discrete and Balanced Spectral Clustering with Scalability. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 14321–14336. [Google Scholar] [CrossRef]
- Zhang, M.; Li, W.; Du, Q. Diverse region-based CNN for hyperspectral image classification. IEEE Trans. Image Process. 2018, 27, 2623–2634. [Google Scholar] [CrossRef]
- Zhang, M.; Li, W.; Du, Q.; Gao, L.; Zhang, B. Feature extraction for classification of hyperspectral and LiDAR data using patch-to-patch CNN. IEEE Trans. Cybern. 2018, 50, 100–111. [Google Scholar] [CrossRef]
- Xu, X.; Li, W.; Ran, Q.; Du, Q.; Gao, L.; Zhang, B. Multisource remote sensing data classification based on convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2017, 56, 937–949. [Google Scholar] [CrossRef]
- Zhao, X.; Tao, R.; Li, W.; Li, H.C.; Du, Q.; Liao, W.; Philips, W. Joint classification of hyperspectral and LiDAR data using hierarchical random walk and deep CNN architecture. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7355–7370. [Google Scholar] [CrossRef]
- Zhao, Q.; Wang, X.; Wang, B.; Wang, L.; Liu, W.; Li, S. A Dual-Attention Deep Discriminative Domain Generalization Model for Hyperspectral Image Classification. Remote Sens. 2023, 15, 5492. [Google Scholar] [CrossRef]
- Zhang, Y.; Li, W.; Zhang, M.; Qu, Y.; Tao, R.; Qi, H. Topological Structure and Semantic Information Transfer Network for Cross-Scene Hyperspectral Image Classification. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 2817–2830. [Google Scholar] [CrossRef]
- Bruzzone, L.; Prieto, D.F. Unsupervised retraining of a maximum likelihood classifier for the analysis of multitemporal remote sensing images. IEEE Trans. Geosci. Remote Sens. 2001, 39, 456–460. [Google Scholar] [CrossRef]
- Sun, H.; Liu, S.; Zhou, S.; Zou, H. Unsupervised cross-view semantic transfer for remote sensing image classification. IEEE Geosci. Remote Sens. Lett. 2015, 13, 13–17. [Google Scholar] [CrossRef]
- Qin, Y.; Bruzzone, L.; Li, B. Tensor alignment based domain adaptation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9290–9307. [Google Scholar] [CrossRef]
- Yang, W.; Peng, J.; Sun, W. Ideal regularized discriminative multiple kernel subspace alignment for domain adaptation in hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5833–5846. [Google Scholar] [CrossRef]
- Zhu, Y.; Zhuang, F.; Wang, J.; Ke, G.; Chen, J.; Bian, J.; Xiong, H.; He, Q. Deep subdomain adaptation network for image classification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 1713–1722. [Google Scholar] [CrossRef]
- Long, M.; Cao, Y.; Wang, J.; Jordan, M. Learning transferable features with deep adaptation networks. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; PMLR: London, UK, 2015; pp. 97–105. [Google Scholar]
- Qu, Y.; Baghbaderani, R.K.; Li, W.; Gao, L.; Zhang, Y.; Qi, H. Physically constrained transfer learning through shared abundance space for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10455–10472. [Google Scholar] [CrossRef]
- Ning, Y.; Peng, J.; Liu, Q.; Huang, Y.; Sun, W.; Du, Q. Contrastive learning based on category matching for domain adaptation in hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5301814. [Google Scholar] [CrossRef]
- Wang, H.; Cheng, Y.; Liu, X.; Kong, Y. Bi-classifier adversarial network for cross-scene hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2023, 20, 5504005. [Google Scholar] [CrossRef]
- Liu, F.; Gao, W.; Liu, J.; Tang, X.; Xiao, L. Adversarial Domain Alignment with Contrastive Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5525720. [Google Scholar] [CrossRef]
- Li, Z.; Tang, X.; Li, W.; Wang, C.; Liu, C.; He, J. A two-stage deep domain adaptation method for hyperspectral image classification. Remote Sens. 2020, 12, 1054. [Google Scholar] [CrossRef]
- Zhang, Y.; Li, W.; Sun, W.; Tao, R.; Du, Q. Single-source domain expansion network for cross-scene hyperspectral image classification. IEEE Trans. Image Process. 2023, 32, 1498–1512. [Google Scholar] [CrossRef]
- Kang, G.; Jiang, L.; Yang, Y.; Hauptmann, A.G. Contrastive adaptation network for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4893–4902. [Google Scholar]
- Gretton, A.; Borgwardt, K.; Rasch, M.; Schölkopf, B.; Smola, A. A kernel method for the two-sample-problem. Adv. Neural Inf. Process. Syst. 2006, 19, 513–520. [Google Scholar]
- Chen, L.; Gan, Z.; Cheng, Y.; Li, L.; Carin, L.; Liu, J. Graph optimal transport for cross-domain alignment. In Proceedings of the International Conference on Machine Learning, Virtual, 12–18 July 2020; PMLR: London, UK, 2020; pp. 1542–1553. [Google Scholar]
- Peyré, G.; Cuturi, M. Computational optimal transport: With applications to data science. Found. Trends Mach. Learn. 2019, 11, 355–607. [Google Scholar] [CrossRef]
- Peyré, G.; Cuturi, M.; Solomon, J. Gromov-wasserstein averaging of kernel and distance matrices. In Proceedings of the International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; PMLR: London, UK, 2016; pp. 2664–2672. [Google Scholar]
- Wang, Z.; Luo, Y.; Huang, Z.; Baktashmotlagh, M. Prototype-matching graph network for heterogeneous domain adaptation. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 2104–2112. [Google Scholar]
- Xu, M.; Wang, H.; Ni, B.; Tian, Q.; Zhang, W. Cross-domain detection via graph-induced prototype alignment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 12355–12364. [Google Scholar]
- Snell, J.; Swersky, K.; Zemel, R. Prototypical networks for few-shot learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 4080–4090. [Google Scholar]
- Gretton, A.; Borgwardt, K.M.; Rasch, M.J.; Schölkopf, B.; Smola, A. A kernel two-sample test. J. Mach. Learn. Res. 2012, 13, 723–773. [Google Scholar]
- Vapnik, V.N. An overview of statistical learning theory. IEEE Trans. Neural Netw. 1999, 10, 988–999. [Google Scholar] [CrossRef]
- Yu, C.; Wang, J.; Chen, Y.; Huang, M. Transfer learning with dynamic adversarial adaptation network. In Proceedings of the 2019 IEEE International Conference on Data Mining (ICDM), Beijing, China, 8–11 November 2019; pp. 778–786. [Google Scholar]
- Zhu, Y.; Zhuang, F.; Wang, J.; Chen, J.; Shi, Z.; Wu, W.; He, Q. Multi-representation adaptation network for cross-domain image classification. Neural Netw. 2019, 119, 214–221. [Google Scholar] [CrossRef]
- He, X.; Chen, Y.; Ghamisi, P. Heterogeneous transfer learning for hyperspectral image classification based on convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3246–3263. [Google Scholar] [CrossRef]
| Class | Class Name | Houston 2013 (Source) Samples | Houston 2018 (Target) Samples |
|---|---|---|---|
| 1 | Grass healthy | 345 | 1353 |
| 2 | Grass stressed | 365 | 4888 |
| 3 | Trees | 365 | 2766 |
| 4 | Water | 285 | 22 |
| 5 | Residential buildings | 319 | 5347 |
| 6 | Non-residential buildings | 408 | 32,459 |
| 7 | Road | 443 | 6365 |
| Total | | 2530 | 53,200 |
| Class | Class Name | Dioni (Source) Samples | Loukia (Target) Samples |
|---|---|---|---|
| 1 | Dense Urban Fabric | 1262 | 206 |
| 2 | Mineral Extraction Sites | 204 | 54 |
| 3 | Non-Irrigated Arable Land | 614 | 426 |
| 4 | Fruit Trees | 150 | 79 |
| 5 | Olive Groves | 1768 | 1107 |
| 6 | Coniferous Forest | 361 | 422 |
| 7 | Dense Sclerophyllous Vegetation | 5035 | 2996 |
| 8 | Sparse Sclerophyllous Vegetation | 6374 | 2361 |
| 9 | Sparsely Vegetated Areas | 1754 | 399 |
| 10 | Rocks and Sand | 492 | 453 |
| 11 | Water | 1612 | 1393 |
| 12 | Coastal Water | 398 | 421 |
| Total | | 20,024 | 10,317 |
Class | DAN | DAAN | MRAN | DSAN | HTCNN | BCAN | HPGA |
---|---|---|---|---|---|---|---|
1 | 55.95 ± 4.15 | 61.57 ± 3.42 | 56.39 ± 5.24 | 57.95 ± 6.22 | 4.85 ± 3.78 | 69.56 ± 7.56 | 45.08 ± 3.72 |
2 | 72.18 ± 6.85 | 76.94 ± 2.89 | 75.57 ± 6.48 | 67.90 ± 9.13 | 71.57 ± 7.35 | 82.82 ± 3.27 | 89.13 ± 1.46 |
3 | 62.87 ± 4.93 | 66.67 ± 3.46 | 68.00 ± 4.58 | 71.69 ± 5.77 | 35.75 ± 5.34 | 57.40 ± 8.28 | 72.60 ± 4.74 |
4 | 100.00 ± 0.00 | 72.73 ± 2.78 | 63.64 ± 4.81 | 81.82 ± 3.67 | 54.64 ± 6.98 | 63.18 ± 5.27 | 100.00 ± 0.00 |
5 | 56.33 ± 4.78 | 52.76 ± 7.44 | 66.50 ± 3.65 | 61.79 ± 8.23 | 54.40 ± 5.76 | 80.49 ± 3.66 | 80.14 ± 7.55 |
6 | 74.11 ± 3.57 | 69.64 ± 4.88 | 68.54 ± 2.71 | 70.26 ± 11.85 | 90.80 ± 4.98 | 76.07 ± 3.90 | 72.82 ± 4.71 |
7 | 31.80 ± 7.33 | 54.23 ± 8.23 | 54.36 ± 4.76 | 54.53 ± 7.54 | 44.05 ± 9.77 | 53.14 ± 3.19 | 55.48 ± 6.19 |
OA (%) | 66.05 ± 5.87 | 66.59 ± 6.72 | 66.95 ± 4.98 | 67.08 ± 2.48 | 74.72 ± 4.35 | 60.81 ± 6.99 | 72.27 ± 5.28 |
AA (%) | 64.74 ± 3.67 | 66.77 ± 4.87 | 64.71 ± 3.85 | 66.56 ± 3.62 | 50.72 ± 5.15 | 73.25 ± 5.84 | 78.36 ± 5.18 |
Kappa (%) | 47.82 ± 3.72 | 50.83 ± 6.77 | 51.51 ± 4.71 | 52.05 ± 1.20 | 55.24 ± 4.11 | 58.16 ± 6.16 | 58.29 ± 1.76 |
Class | DAN | DAAN | MRAN | DSAN | HTCNN | BCAN | HPGA |
---|---|---|---|---|---|---|---|
1 | 10.68 ± 3.58 | 18.45 ± 6.41 | 11.65 ± 7.23 | 26.21 ± 4.37 | 5.34 ± 1.89 | 12.14 ± 8.32 | 10.22 ± 5.69 |
2 | 40.74 ± 7.12 | 14.81 ± 3.94 | 0.00 ± 0.00 | 18.52 ± 9.01 | 0.00 ± 0.00 | 31.85 ± 2.18 | 12.02 ± 3.57 |
3 | 23.47 ± 4.91 | 7.51 ± 1.47 | 3.52 ± 9.27 | 23.71 ± 6.81 | 45.07 ± 5.63 | 8.54 ± 2.89 | 69.48 ± 1.54 |
4 | 3.80 ± 2.22 | 10.13 ± 9.34 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.00 ± 0.00 | 7.09 ± 1.93 | 0.00 ± 0.00 |
5 | 17.71 ± 9.84 | 10.12 ± 3.45 | 29.09 ± 8.57 | 48.69 ± 2.67 | 18.61 ± 4.55 | 17.52 ± 3.78 | 20.23 ± 5.41 |
6 | 4.98 ± 3.29 | 5.21 ± 2.43 | 40.52 ± 1.12 | 45.97 ± 6.54 | 2.61 ± 7.92 | 20.62 ± 4.78 | 13.88 ± 9.87 |
7 | 62.05 ± 4.67 | 81.91 ± 3.23 | 64.12 ± 6.98 | 60.58 ± 5.33 | 77.77 ± 8.45 | 85.34 ± 7.56 | 77.24 ± 2.88 |
8 | 65.14 ± 8.69 | 72.91 ± 5.43 | 57.65 ± 3.97 | 67.26 ± 9.15 | 62.22 ± 4.36 | 71.01 ± 7.83 | 69.72 ± 1.62 |
9 | 63.16 ± 7.19 | 7.02 ± 2.11 | 71.18 ± 9.45 | 37.09 ± 6.54 | 6.27 ± 8.22 | 57.84 ± 3.48 | 7.52 ± 5.17 |
10 | 0.00 ± 0.00 | 0.22 ± 0.15 | 23.18 ± 8.44 | 4.86 ± 1.75 | 0.00 ± 0.00 | 0.00 ± 0.00 | 58.28 ± 9.34 |
11 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 |
12 | 95.96 ± 8.73 | 100.00 ± 0.00 | 100.00 ± 0.00 | 100.00 ± 0.00 | 97.86 ± 3.24 | 97.96 ± 7.89 | 57.72 ± 4.53 |
OA (%) | 56.31 ± 5.87 | 60.47 ± 3.21 | 58.32 ± 9.48 | 60.92 ± 7.63 | 58.63 ± 1.22 | 42.49 ± 2.91 | 62.16 ± 8.36 |
AA (%) | 40.64 ± 2.34 | 35.69 ± 9.12 | 41.74 ± 3.87 | 44.41 ± 6.58 | 34.65 ± 5.14 | 64.31 ± 1.82 | 44.84 ± 2.47 |
Kappa (%) | 45.94 ± 8.54 | 49.71 ± 3.98 | 49.14 ± 7.12 | 52.44 ± 1.35 | 47.47 ± 6.47 | 54.99 ± 2.44 | 53.03 ± 4.78 |
| Module 1 | Module 2 | Module 3 | Module 4 | Houston OA (%) | Houston AA (%) | Houston Kappa (%) | HyRANK OA (%) | HyRANK AA (%) | HyRANK Kappa (%) |
|---|---|---|---|---|---|---|---|---|---|
| ✓ | - | - | - | 61.22 ± 3.45 | 64.29 ± 2.31 | 50.85 ± 4.67 | 45.80 ± 5.24 | 32.16 ± 1.78 | 40.23 ± 2.59 |
| ✓ | ✓ | - | - | 68.42 ± 2.54 | 70.17 ± 4.13 | 54.76 ± 3.11 | 56.91 ± 3.98 | 37.22 ± 1.67 | 46.34 ± 2.46 |
| ✓ | ✓ | ✓ | - | 69.25 ± 5.47 | 75.84 ± 2.36 | 56.89 ± 4.89 | 61.19 ± 1.24 | 41.75 ± 3.21 | 52.19 ± 2.88 |
| ✓ | ✓ | ✓ | ✓ | 72.27 ± 5.28 | 78.36 ± 5.18 | 58.29 ± 1.76 | 62.16 ± 3.72 | 44.84 ± 2.47 | 53.03 ± 4.78 |
| OA (%) | 0.001 | 0.01 | 0.1 | 1 | 10 |
|---|---|---|---|---|---|
| Houston 2018 | 68.85 ± 2.34 | 71.15 ± 3.45 | 71.06 ± 1.78 | 67.35 ± 4.56 | 67.04 ± 2.67 |
| Loukia | 60.17 ± 3.12 | 59.82 ± 1.54 | 60.84 ± 2.67 | 59.77 ± 4.89 | 60.04 ± 2.45 |
| OA (%) | L = 1 | L = 2 | L = 3 | L = 4 |
|---|---|---|---|---|
| Houston 2018 | 69.74 ± 3.14 | 72.03 ± 4.23 | 72.27 ± 2.67 | 68.64 ± 3.21 |
| Loukia | 57.26 ± 4.12 | 59.97 ± 2.45 | 62.16 ± 3.36 | 59.77 ± 1.78 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shen, D.; Hu, H.; He, F.; Zhang, F.; Zhao, J.; Shen, X. Hierarchical Prototype-Aligned Graph Neural Network for Cross-Scene Hyperspectral Image Classification. Remote Sens. 2024, 16, 2464. https://doi.org/10.3390/rs16132464