Residual Group Channel and Space Attention Network for Hyperspectral Image Classification
"> Figure 1
<p>A block of ResNet (Left) and ResNeXt with cardinality = 8 (Right). A layer is shown as (# in channels, filter size, # out channels).</p> "> Figure 2
<p>The structure of the SE block inserted into the ResNet.</p> "> Figure 3
<p>The basic structure of the proposed residual attention module.</p> "> Figure 4
<p>The structure of the residual group channel-wise attention (RGCA) module.</p> "> Figure 5
<p>The structure of the residual channel-wise attention (RCA) module.</p> "> Figure 6
<p>The structure of the residual group channel-wise attention (RGCA) module before simplification (<b>left</b>) and after simplification (<b>right</b>).</p> "> Figure 7
<p>The structure of the residual spatial-wise attention (RSA) module.</p> "> Figure 8
<p>Overall HSI classification structure of the proposed residual group channel and space attention (RGCSA) network.</p> "> Figure 9
<p>General structure of the building block in RGCSA (taking block 1 as an example).</p> "> Figure 10
<p>Classification results of the models in comparison with the IN dataset. (<b>a</b>) False color image. (<b>b</b>) Ground-truth labels, (<b>c</b>–<b>g</b>) Classification results of SSRN, 3-D-ResNeXt, DBMA, SSAN, and RGCSA.</p> "> Figure 11
<p>Classification results of the models in comparison with the UP dataset. (<b>a</b>) False color image. (<b>b</b>) Ground-truth labels, (<b>c</b>–<b>g</b>) Classification results of SSRN, 3D-ResNeXt, DBMA, SSAN, and RGCSA.</p> "> Figure 12
<p>Classification results of the models in comparison with the KSC dataset. (<b>a</b>) False color image. (<b>b</b>) Ground-truth labels, (<b>c</b>–<b>g</b>) Classification results of SSRN, 3D-ResNeXt, DBMA, SSAN, and RGCSA.</p> "> Figure 13
<p>OAs of four different attention mechanisms on the three HSI datasets.</p> "> Figure 14
<p>OAs of the SSRN, 3-D-ResNeXt, and RGCSA with different ratios of training samples for the IN dataset.</p> "> Figure 15
<p>OAs of the SSRN, 3-D-ResNeXt, and RGCSA with different ratios of training samples for the UP dataset.</p> "> Figure 16
<p>OAs of the SSRN, 3-D-ResNeXt, and RGCSA with different ratios of training samples for the KSC dataset.</p> ">
Abstract
1. Introduction
- Combining the bottom-up top-down attention structure with residual connections, we constructed residual channel and space attention modules without any additional manual design, and proposed a 3-D-CNN-based residual group channel and space attention network (RGCSA) for HSI classification. On the one hand, the residual connections accelerate the flow of information and make the network easier to train. On the other hand, applying channel-wise attention before spatial-wise attention strengthens important information and suppresses unimportant information during training; compared with previous methods, RGCSA achieves higher HSI classification accuracy with fewer training samples.
- We applied the principle of group convolution to the channel attention structure to construct a residual group channel-wise attention module, which emphasizes each piece of useful channel information. Compared with previous channel attention methods, it reduces the risk of losing useful channel information during attention optimization.
- We proposed a novel spatial-wise attention module that uses transposed convolution for up-sampling. It preserves the mapping between spatial pixels during attention optimization and makes full use of context information to refine features in the spatial dimension, focusing on the most informative areas.
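To make the second contribution concrete, the group-wise channel attention idea can be sketched in NumPy. This is a minimal stand-in, not the paper's implementation: random matrices replace the learned 3-D convolutions, and batch normalization is omitted; only the per-group squeeze-excite gating and the residual add follow the text.

```python
import numpy as np

def group_channel_attention(x, groups=8, reduction=4, rng=None):
    """Illustrative group channel-wise attention on a (C, H, W) feature map.

    Channels are split into `groups`; each group is squeezed by global
    average pooling and re-weighted through a small bottleneck, so no
    single group can dominate the channel descriptor. The gated features
    are added back through a residual connection. Weights are random
    placeholders here; in the paper they are learned 3-D convolutions.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    C, H, W = x.shape
    assert C % groups == 0, "channels must split evenly into groups"
    cg = C // groups
    out = np.empty_like(x)
    for g in range(groups):
        xg = x[g * cg:(g + 1) * cg]                 # one channel group
        z = xg.mean(axis=(1, 2))                    # squeeze: (cg,)
        w1 = rng.standard_normal((cg // reduction, cg)) * 0.1
        w2 = rng.standard_normal((cg, cg // reduction)) * 0.1
        s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))  # excite + sigmoid
        out[g * cg:(g + 1) * cg] = xg * s[:, None, None] + xg      # gate + residual
    return out
```

Because each group computes its own channel descriptor, attention optimization in one group cannot suppress useful channels in another, which is the motivation stated above.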
2. Related Work
2.1. Pixel-Level HSI Classification Framework
2.2. Three-Dimensional ResNeXt Network
2.3. Squeeze-and-Excitation Network
2.4. Proposed Attention Mechanism
2.4.1. Residual Group Channel-Wise Attention Module
2.4.2. Residual Spatial-Wise Attention Module
3. Network Configuration and Experimental Setup
3.1. Overall Framework of the Proposed Network
3.2. Network Configuration of the Proposed Network
3.2.1. Network Configuration of the Residual Group Channel-Wise Attention Module
3.2.2. Network Configuration of the Residual Spatial-Wise Attention Module
3.3. Experimental Setup
4. Experiments and Results
4.1. Experimental Datasets
4.2. Experimental Parameter Discussion
4.2.1. Effect of Different Ratios of the Training, Validation, and Test Datasets
4.2.2. Effect of the Number of Groups
4.3. Comparison of Classification Results with State-of-the-Art Methods
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Layer | Output Size | RGCSA | Connected to |
---|---|---|---|
Input | |||
CONVBN | Input | ||
Block1 | same | CONVBN | |
Block2 | Block1 | ||
Block3 | Block2 | ||
Block4 | Block3 | ||
GAP | 512 | Block4 | |
Dense (SoftMax) | 16 | 16 | GAP |
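The last two rows of the table above (GAP over the block-4 output, then a 16-way dense SoftMax layer) can be sketched as follows. The (512, H, W) input shape and the 16 output classes come from the table; the zero weights are placeholders, not the trained parameters.

```python
import numpy as np

def rgcsa_head(features, w, b):
    """Sketch of the RGCSA classification head: global average pooling
    collapses the (512, H, W) block-4 output to a 512-vector, and a
    dense SoftMax layer maps it to 16 class probabilities."""
    z = features.mean(axis=(1, 2))          # GAP -> (512,)
    logits = w @ z + b                      # Dense, w: (16, 512), b: (16,)
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()

# placeholder weights: all-zero logits yield a uniform distribution
probs = rgcsa_head(np.ones((512, 3, 3)), np.zeros((16, 512)), np.zeros(16))
```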
Layer | Output Size | RGCA | Connected to |
---|---|---|---|
CONV3D | Group1 | ||
CONVBN | same | CONV3D | |
ReLU1 | CONVBN | ||
GAP (Reshape) | ReLU1 | ||
CONV3D | GAP | ||
ReLU2 | CONV3D | ||
CONV3D | ReLU2 | ||
Sigmoid | CONV3D | ||
Multiply | Sigmoid, ReLU1 | ||
Add | Multiply, CONV3D |
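The data flow in the table above — trunk convolution, ReLU, global average pooling, a two-layer bottleneck, sigmoid gating, and the residual add back to the trunk — can be traced with a minimal NumPy stand-in. Here the 1 × 1 convolutions become matrix products over the channel axis, and BN and the channel grouping are omitted for brevity; only the connectivity follows the table.

```python
import numpy as np

def rca_block(x, w_in, w1, w2):
    """Sketch of the RCA data flow. x: (C_in, H, W); w_in: (C, C_in)
    stands in for the trunk CONV3D; w1/w2 form the bottleneck."""
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    relu = lambda t: np.maximum(t, 0.0)

    trunk = np.einsum('oc,chw->ohw', w_in, x)   # CONV3D
    f = relu(trunk)                             # CONVBN + ReLU1 (BN omitted)
    z = f.mean(axis=(1, 2))                     # GAP (reshape to a vector)
    a = sigmoid(w2 @ relu(w1 @ z))              # CONV3D -> ReLU2 -> CONV3D -> Sigmoid
    return f * a[:, None, None] + trunk         # Multiply(Sigmoid, ReLU1), Add(Multiply, CONV3D)
```

Note that, as in the table, the residual is taken from the trunk CONV3D output rather than the block input, so the gated features and the skip path share the same channel count.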
Layer | Output Size | RSA | Connected to |
---|---|---|---|
BN | Concat | ||
CONVBN | same | BN | |
ReLU1 | CONVBN | ||
CONV3D | ReLU1 | ||
ReLU2 | CONV3D | ||
CONV3D | ReLU2 | ||
ReLU3 | CONV3D | ||
Transposed Conv | ReLU3 | ||
ReLU4 | Transposed Conv | ||
Transposed Conv | ReLU4 | ||
Sigmoid | Transposed Conv | ||
Multiply | Sigmoid, ReLU1 | ||
Add | Multiply, BN |
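The bottom-up top-down flow of the RSA table — downsampling convolutions, transposed-convolution upsampling, sigmoid masking, and the residual add — can be sketched with fixed NumPy stand-ins. Average pooling replaces the strided convolutions and a uniform kernel replaces the learned transposed convolutions; only the overall topology follows the table.

```python
import numpy as np

def spatial_attention(x, stride=2):
    """Illustrative bottom-up top-down spatial attention on a (C, H, W) map.

    The map is downsampled (average pooling standing in for strided
    convolutions), then upsampled back to the original resolution
    (a fixed uniform kernel standing in for learned transposed
    convolutions), so each output pixel keeps a fixed mapping to the
    spatial context it was pooled from. A sigmoid mask re-weights the
    input spatially, and a residual connection adds the input back.
    """
    C, H, W = x.shape
    s = stride
    # bottom-up: average pool over s x s windows
    down = x.reshape(C, H // s, s, W // s, s).mean(axis=(2, 4))
    # top-down: transposed-conv-style upsampling with a uniform kernel
    up = np.kron(down, np.ones((s, s)))             # back to (C, H, W)
    mask = 1.0 / (1.0 + np.exp(-up.mean(axis=0)))   # collapse channels -> (H, W)
    return x * mask[None] + x                       # spatial gate + residual
```

Because the upsampling kernel covers exactly the pooled window, each mask value is derived from the context region around its pixel, which is the property the RSA module is designed to preserve.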
No. | Class | Train | Val | Test | Total Samples |
---|---|---|---|---|---|
1 | Alfalfa | 14 | 1 | 31 | 46 |
2 | Corn-notill | 429 | 131 | 868 | 1428 |
3 | Corn-mintill | 249 | 83 | 498 | 830 |
4 | Corn | 72 | 22 | 143 | 237 |
5 | Grass-pasture | 145 | 42 | 296 | 483 |
6 | Grass-trees | 220 | 69 | 441 | 730 |
7 | Grass-pasture-mowed | 9 | 3 | 16 | 28 |
8 | Hay-windrowed | 144 | 55 | 279 | 478 |
9 | Oats | 6 | 4 | 10 | 20 |
10 | Soybean-notill | 292 | 94 | 586 | 972 |
11 | Soybean-mintill | 737 | 264 | 1454 | 2455 |
12 | Soybean-clean | 178 | 56 | 359 | 593 |
13 | Wheat | 62 | 26 | 117 | 205 |
14 | Woods | 380 | 136 | 749 | 1265 |
15 | Buildings-Grass-Trees-Drives | 116 | 34 | 236 | 386 |
16 | Stone-Steel-Towers | 28 | 5 | 60 | 93 |
 | Total | 3081 | 1025 | 6143 | 10,249 |
No. | Class | Train | Val | Test | Total Samples |
---|---|---|---|---|---|
1 | Asphalt | 1327 | 670 | 4634 | 6631 |
2 | Meadows | 3730 | 1810 | 13,109 | 18,649 |
3 | Gravel | 420 | 241 | 1438 | 2099 |
4 | Trees | 613 | 333 | 2118 | 3064 |
5 | Painted metal sheets | 269 | 134 | 942 | 1345 |
6 | Bare Soil | 1006 | 500 | 3523 | 5029 |
7 | Bitumen | 266 | 133 | 931 | 1330 |
8 | Self-Blocking Bricks | 737 | 363 | 2582 | 3682 |
9 | Shadows | 190 | 97 | 660 | 947 |
 | Total | 8558 | 4281 | 29,937 | 42,776 |
No. | Class | Train | Val | Test | Total Samples |
---|---|---|---|---|---|
1 | Scrub | 153 | 78 | 530 | 761 |
2 | Willow swamp | 49 | 29 | 165 | 243 |
3 | CP hammock | 52 | 28 | 176 | 256 |
4 | Slash pine | 51 | 31 | 170 | 252 |
5 | Oak/Broadleaf | 33 | 18 | 110 | 161 |
6 | Hardwood | 46 | 22 | 161 | 229 |
7 | Swamp | 21 | 4 | 80 | 105 |
8 | Graminoid marsh | 87 | 45 | 299 | 431 |
9 | Spartina marsh | 104 | 39 | 377 | 520 |
10 | Cattail marsh | 81 | 40 | 283 | 404 |
11 | Salt marsh | 84 | 39 | 296 | 419 |
12 | Mud flats | 101 | 61 | 341 | 503 |
13 | Water | 186 | 87 | 654 | 927 |
 | Total | 1048 | 521 | 3642 | 5211 |
Ratios | Training Time (s) | Test Time (s) | OA (%) | AA (%) | Kappa × 100 |
---|---|---|---|---|---|
2:1:7 | 10,861.78 | 99.90 | 99.52 | 99.22 | 99.53 |
3:1:6 | 15,769.93 | 85.99 | 99.87 | 99.88 | 99.85 |
4:1:5 | 12,320.78 | 72.52 | 99.86 | 99.77 | 99.84 |
5:1:4 | 15,138.30 | 59.03 | 99.86 | 99.74 | 99.82 |
Ratios | Training Time (s) | Test Time (s) | OA (%) | AA (%) | Kappa × 100 |
---|---|---|---|---|---|
2:1:7 | 25,837.94 | 235.55 | 100.0 | 99.99 | 99.99 |
3:1:6 | 37,310.21 | 205.73 | 99.97 | 99.98 | 99.96 |
4:1:5 | 29,345.04 | 171.84 | 99.98 | 99.97 | 99.97 |
5:1:4 | 36,296.38 | 135.85 | 99.98 | 99.98 | 99.98 |
Ratios | Training Time (s) | Test Time (s) | OA (%) | AA (%) | Kappa × 100 |
---|---|---|---|---|---|
2:1:7 | 5142.95 | 47.34 | 100.0 | 100.0 | 100.0 |
3:1:6 | 7292.39 | 39.70 | 100.0 | 99.99 | 99.98 |
4:1:5 | 5779.27 | 33.82 | 99.98 | 99.98 | 99.99 |
5:1:4 | 7094.23 | 28.22 | 99.99 | 99.98 | 99.98 |
Datasets | G | Params | Training Time (s) | Test Time (s) | OA (%) |
---|---|---|---|---|---|
IN | 6 | 2,974,912 | 11,923.85 | 64.96 | 99.54 |
8 | 4,489,120 | 15,769.93 | 85.99 | 99.87 | |
10 | 6,264,960 | 21,401.03 | 113.47 | 99.87 | |
UP | 6 | 2,972,224 | 20,087.09 | 183.93 | 99.97 |
8 | 4,485,536 | 25,837.94 | 235.55 | 100.0 | |
10 | 6,260,480 | 34,160.93 | 299.13 | 99.94 | |
KSC | 6 | 2,973,760 | 3887.32 | 36.16 | 99.97 |
8 | 4,487,584 | 5142.95 | 47.34 | 100.0 | |
10 | 6,263,040 | 6734.43 | 58.56 | 99.97 |
Metric/Class | SVM | SSRN | 3D-ResNeXt | DBMA | SSAN | RGCSA |
---|---|---|---|---|---|---|
OA (%) | 81.67 | 99.46 | 99.79 | 98.19 | 98.64 | 99.87 |
AA (%) | 79.84 | 93.05 | 99.71 | 96.31 | 97.45 | 99.92 |
Kappa × 100 | 78.76 | 99.39 | 99.70 | 97.94 | 97.50 | 99.85 |
1 | 96.78 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
2 | 78.74 | 100.0 | 100.0 | 97.10 | 97.65 | 100.0 |
3 | 82.26 | 99.00 | 99.80 | 99.03 | 98.69 | 99.80 |
4 | 99.03 | 98.59 | 98.59 | 92.20 | 96.95 | 99.29 |
5 | 93.75 | 99.65 | 99.30 | 99.26 | 99.15 | 100.0 |
6 | 85.96 | 100.0 | 100.0 | 98.20 | 98.95 | 100.0 |
7 | 40.00 | 100.0 | 100.0 | 81.25 | 97.65 | 100.0 |
8 | 91.80 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
9 | 0 | 0 | 100.0 | 85.71 | 89.94 | 100.0 |
10 | 96.00 | 97.44 | 100.0 | 98.00 | 99.14 | 100.0 |
11 | 70.94 | 99.73 | 99.59 | 98.46 | 99.12 | 99.79 |
12 | 74.73 | 99.72 | 99.72 | 98.15 | 98.95 | 99.87 |
13 | 99.04 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
14 | 94.29 | 99.74 | 100.0 | 99.74 | 99.96 | 100.0 |
15 | 85.11 | 100.0 | 100.0 | 96.12 | 98.14 | 100.0 |
16 | 96.78 | 95.00 | 98.31 | 97.67 | 97.33 | 100.0 |
Metric/Class | SVM | SSRN | 3D-ResNeXt | DBMA | SSAN | RGCSA |
---|---|---|---|---|---|---|
OA (%) | 90.58 | 99.97 | 99.93 | 98.88 | 99.05 | 100.0 |
AA (%) | 92.99 | 99.96 | 99.91 | 98.71 | 98.91 | 99.99 |
Kappa × 100 | 87.21 | 99.96 | 99.91 | 98.50 | 98.64 | 100.0 |
1 | 87.24 | 99.85 | 99.85 | 99.37 | 99.45 | 99.98 |
2 | 89.93 | 100.0 | 99.99 | 99.73 | 99.84 | 100.0 |
3 | 86.48 | 100.0 | 99.59 | 99.16 | 98.68 | 100.0 |
4 | 99.95 | 99.95 | 100.0 | 98.21 | 99.21 | 100.0 |
5 | 95.78 | 99.89 | 100.0 | 100.0 | 98.16 | 100.0 |
6 | 97.69 | 99.97 | 100.0 | 97.45 | 98.36 | 100.0 |
7 | 95.44 | 100.0 | 100.0 | 100.0 | 99.11 | 100.0 |
8 | 84.40 | 100.0 | 99.77 | 95.12 | 98.26 | 100.0 |
9 | 100.0 | 100.0 | 100.0 | 99.36 | 99.12 | 100.0 |
Metric/Class | SVM | SSRN | 3D-ResNeXt | DBMA | SSAN | RGCSA |
---|---|---|---|---|---|---|
OA (%) | 80.29 | 99.97 | 99.67 | 99.72 | 99.62 | 100.0 |
AA (%) | 65.64 | 99.95 | 99.30 | 99.42 | 99.53 | 99.99 |
Kappa × 100 | 77.98 | 99.97 | 99.63 | 99.50 | 99.58 | 99.99 |
1 | 92.16 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
2 | 86.16 | 99.40 | 99.38 | 97.16 | 98.41 | 100.0 |
3 | 42.55 | 100.0 | 97.40 | 98.45 | 97.65 | 100.0 |
4 | 67.69 | 100.0 | 99.40 | 100.0 | 99.45 | 99.99 |
5 | 0 | 100.0 | 95.76 | 100.0 | 100.0 | 100.0 |
6 | 54.71 | 100.0 | 100.0 | 99.58 | 99.69 | 100.0 |
7 | 0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
8 | 65.12 | 100.0 | 100.0 | 99.56 | 100.0 | 100.0 |
9 | 67.82 | 100.0 | 100.0 | 100.0 | 100.0 | 99.99 |
10 | 93.4 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
11 | 100.0 | 100.0 | 100.0 | 99.89 | 99.42 | 100.0 |
12 | 83.75 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
13 | 100.0 | 100.0 | 100.0 | 97.75 | 99.29 | 100.0 |
Datasets | Methods | Params | Training Time (s) | Test Time (s) |
---|---|---|---|---|
IN | 3D-ResNeXt | 1,554,288 | 3054.29 | 20.68 |
RGCA | 2,736,992 | 10,629.34 | 55.43 | |
RSA | 3,290,400 | 12,593.92 | 71.47 | |
RGCSA | 4,489,120 | 15,769.93 | 85.99 | |
UP | 3D-ResNeXt | 1,550,704 | 6077.30 | 54.85 |
RGCA | 2,733,408 | 18,297.08 | 155.33 | |
RSA | 3,286,816 | 18,801.38 | 184.43 | |
RGCSA | 4,485,536 | 25,837.94 | 235.55 | |
KSC | 3D-ResNeXt | 1,552,752 | 1253.42 | 10.33 |
RGCA | 2,735,456 | 3584.96 | 30.23 | |
RSA | 3,288,864 | 4044.99 | 38.97 | |
RGCSA | 4,487,584 | 5142.95 | 47.34 |
Metric | Method | 2:1:7 | 3:1:6 | 4:1:5 | 5:1:4 |
---|---|---|---|---|---|
Training Time (s) | SSRN | 942.89 | 1059.21 | 1110.62 | 1262.90 |
3D-ResNeXt | 2966.77 | 3054.29 | 2974.87 | 4035.33 | |
RGCSA | 10,861.78 | 15,769.93 | 12,320.78 | 15,138.30 | |
Test Time (s) | SSRN | 9.63 | 8.36 | 7.25 | 5.51 |
3D-ResNeXt | 25.87 | 20.68 | 16.65 | 14.75 | |
RGCSA | 99.90 | 85.99 | 72.52 | 59.03 |
Metric | Method | 2:1:7 | 3:1:6 | 4:1:5 | 5:1:4 |
---|---|---|---|---|---|
Training Time (s) | SSRN | 2469.97 | 2821.26 | 2750.48 | 3282.24 |
3D-ResNeXt | 6077.30 | 7095.93 | 6857.42 | 8410.72 | |
RGCSA | 25,837.94 | 37,310.21 | 29,345.04 | 36,296.38 | |
Test Time (s) | SSRN | 25.01 | 23.24 | 17.83 | 13.41 |
3D-ResNeXt | 54.85 | 52.18 | 39.32 | 36.74 | |
RGCSA | 235.55 | 205.73 | 171.84 | 135.85 |
Metric | Method | 2:1:7 | 3:1:6 | 4:1:5 | 5:1:4 |
---|---|---|---|---|---|
Training Time (s) | SSRN | 447.09 | 643.90 | 498.93 | 610.44 |
3D-ResNeXt | 1253.42 | 1384.68 | 1345.30 | 1632.73 | |
RGCSA | 5142.95 | 7292.39 | 5779.27 | 7094.23 | |
Test Time (s) | SSRN | 4.62 | 4.06 | 3.31 | 2.55 |
3D-ResNeXt | 10.33 | 8.66 | 7.22 | 5.75 | |
RGCSA | 47.34 | 39.70 | 33.82 | 28.22 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wu, P.; Cui, Z.; Gan, Z.; Liu, F. Residual Group Channel and Space Attention Network for Hyperspectral Image Classification. Remote Sens. 2020, 12, 2035. https://doi.org/10.3390/rs12122035