A Pansharpening Generative Adversarial Network with Multilevel Structure Enhancement and a Multistream Fusion Architecture
"> Figure 1
<p>Two kinds of finite difference operators. (<b>a</b>) first-level gradient operator, (<b>b</b>) second-level gradient operator.</p> "> Figure 2
<p>Comparison of structural information. (<b>a</b>) PAN image, (<b>b</b>) horizontal structural information with first-level gradient operator, (<b>c</b>) vertical structural information with first-level gradient operator, (<b>d</b>) horizontal structural information with second-level gradient operator, (<b>e</b>) vertical structural information with second-level gradient operator.</p> "> Figure 3
<p>Detailed architectures of the Generator.</p> "> Figure 4
<p>Detailed architectures of the Discriminator.</p> "> Figure 5
<p>Flow chart of reduced-resolution exmperiment.</p> "> Figure 6
<p>Fusion results from the reduced-resolution experiment on the Gaofen-2 dataset.</p> "> Figure 7
<p>Fusion results from the reduced-resolution experiment on the WorldView-2 dataset.</p> "> Figure 8
<p>Residual images from the reduced-resolution experiment on the Gaofen-2 dataset.</p> "> Figure 9
<p>Residual images from the reduced-resolution experiment on the WorldView-2 dataset.</p> "> Figure 10
<p>Fusion results from the full-resolution experiment on the WorldView-2 dataset.</p> "> Figure 11
<p>The performance of the land area in the reduced-resolution experiment.</p> "> Figure 12
<p>The performance of the land area in the full-resolution experiment.</p> "> Figure 13
<p>The performance of the vegetation area in the reduced-resolution experiment.</p> "> Figure 14
<p>The performance of the vegetation area in the full-resolution experiment.</p> "> Figure 15
<p>The performance of the building area in the reduced-resolution experiment.</p> "> Figure 16
<p>The performance of the building area in the full-resolution experiment.</p> "> Figure 17
<p>The performance of the road area in the reduced-resolution experiment.</p> "> Figure 18
<p>The performance of the road area in the full-resolution experiment.</p> ">
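The first- and second-level gradient operators shown in Figures 1 and 2 can be illustrated with plain finite differences. The sketch below assumes simple first- and second-order difference stencils; the paper's exact operator masks may differ.

```python
import numpy as np

def first_level_gradient(img):
    """First-order forward differences: horizontal and vertical structure maps."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal differences
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical differences
    return gx, gy

def second_level_gradient(img):
    """Second-order differences (discrete second derivative along each axis)."""
    gxx = np.zeros_like(img)
    gyy = np.zeros_like(img)
    gxx[:, 1:-1] = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
    gyy[1:-1, :] = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]
    return gxx, gyy
```

On a linear intensity ramp the first-level operator responds with a constant value while the second-level operator is zero, which is why the two levels capture complementary edge and texture structure.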
Abstract
1. Introduction
2. Related Work
3. Method
3.1. Pansharpening Based on a Variational Model and a GAN
3.2. Multi-Stream Structure Generator and Discriminator
4. Experiments
4.1. Experimental Setup
4.2. Reduced-Resolution Experiment
4.3. Ablation Experiment
4.4. Full-Resolution Experiment
4.5. Local Area Experiment
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Loncan, L.; De Almeida, L.B.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simoes, M.; et al. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46.
- Gaetano, R.; Masi, G.; Poggi, G.; Verdoliva, L.; Scarpa, G. Marker-controlled watershed-based segmentation of multiresolution remote sensing images. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2987–3004.
- Liu, P.; Xiao, L.; Zhang, J.; Naz, B. Spatial-Hessian-feature-guided variational model for pan-sharpening. IEEE Trans. Geosci. Remote Sens. 2015, 54, 2235–2253.
- Shettigara, V.K. A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set. Photogramm. Eng. Remote Sens. 1992, 58, 561–567.
- Feng, J.; Zeren, L.; Xia, C.; Zhiliang, W. Remote sensing image fusion method based on PCA and NSCT transform. J. Graphics 2017, 38, 247–252.
- Rahmani, S.; Strait, M.; Merkurjev, D.; Moeller, M.; Wittman, T. An adaptive IHS pan-sharpening method. IEEE Geosci. Remote Sens. Lett. 2010, 7, 746–750.
- Garzelli, A.; Nencini, F.; Capobianco, L. Optimal MMSE pan sharpening of very high resolution multispectral images. IEEE Trans. Geosci. Remote Sens. 2007, 46, 228–236.
- Shensa, M.J. The discrete wavelet transform: Wedding the a trous and Mallat algorithms. IEEE Trans. Signal Process. 1992, 40, 2464–2482.
- Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A. Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2300–2312.
- Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored multiscale fusion of high-resolution MS and Pan imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596.
- Zheng, X.H.Z.Q. A fusion method of satellite remote sensing image based on IHS transform and curvelet transform. J. South China Univ. Technol. 2016, 44, 58.
- Zeng, D.; Hu, Y.; Huang, Y.; Xu, Z.; Ding, X. Pan-sharpening with structural consistency and ℓ1/2 gradient prior. Remote Sens. Lett. 2016, 7, 1170–1179.
- Li, S.; Yang, B. A new pan-sharpening method using a compressed sensing technique. IEEE Trans. Geosci. Remote Sens. 2010, 49, 738–746.
- Liu, Y.; Chen, X.; Wang, Z.; Wang, Z.J.; Ward, R.K.; Wang, X. Deep learning for pixel-level image fusion: Recent advances and future prospects. Inf. Fusion 2018, 42, 158–173.
- Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. Pansharpening by convolutional neural networks. Remote Sens. 2016, 8, 594.
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307.
- Yang, J.; Fu, X.; Hu, Y.; Huang, Y.; Ding, X.; Paisley, J. PanNet: A deep network architecture for pan-sharpening. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 5449–5457.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Liu, X.; Wang, Y.; Liu, Q. PSGAN: A generative adversarial network for remote sensing image pan-sharpening. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 873–877.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. arXiv 2014, arXiv:1406.2661.
- Zhang, L.; Li, W.; Zhang, C.; Lei, D. A generative adversarial network with structural enhancement and spectral supplement for pan-sharpening. Neural Comput. Appl. 2020, 32, 18347–18359.
- Xiang, Z.; Xiao, L.; Liu, P.; Zhang, Y. A Multi-Scale Densely Deep Learning Method for Pansharpening. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 2786–2789.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Li, X.; Xu, F.; Lyu, X.; Tong, Y.; Chen, Z.; Li, S.; Liu, D. A remote-sensing image pan-sharpening method based on multi-scale channel attention residual network. IEEE Access 2020, 8, 27163–27177.
- Ma, J.; Yu, W.; Chen, C.; Liang, P.; Guo, X.; Jiang, J. Pan-GAN: An unsupervised pan-sharpening method for remote sensing image fusion. Inf. Fusion 2020, 62, 110–120.
- Wang, J.; Shao, Z.; Huang, X.; Lu, T.; Zhang, R.; Ma, J. Pan-sharpening via High-pass Modification Convolutional Neural Network. arXiv 2021, arXiv:2105.11576.
- Chen, C.; Li, Y.; Liu, W.; Huang, J. Image fusion with local spectral consistency and dynamic gradient sparsity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2760–2765.
- Wang, T.; Fang, F.; Li, F.; Zhang, G. High-Quality Bayesian Pansharpening. IEEE Trans. Image Process. 2019, 28, 227–239.
- Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier nonlinearities improve neural network acoustic models. Proc. ICML 2013, 30, 3.
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
- Vivone, G.; Alparone, L.; Chanussot, J.; Dalla Mura, M.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2014, 53, 2565–2586.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Wald, L. Data Fusion: Definitions and Architectures—Fusion of Images of Different Spatial Resolutions; Les Presses des Mines: Paris, France, 2002.
- Garzelli, A.; Nencini, F. Hypercomplex quality assessment of multi/hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2009, 6, 662–665.
- Zhou, J.; Civco, D.; Silander, J. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. Int. J. Remote Sens. 1998, 19, 743–757.
- Alparone, L.; Aiazzi, B.; Baronti, S.; Garzelli, A.; Nencini, F.; Selva, M. Multispectral and panchromatic data fusion assessment without reference. Photogramm. Eng. Remote Sens. 2008, 74, 193–200.
Method | Time (min) | Parameters | FLOPs
---|---|---|---
PCNN | | 96.280 K | 1.577 G
PANNET | | 78.920 K | 1.293 G
PSGAN | | 2.441 M | 33.787 G
Proposed | | 6.702 M | 25.884 G
Method | SAM | ERGAS | SCC | |
---|---|---|---|---|
ATWT | 1.7525 | 2.1664 | 0.5545 | 0.5016 |
DGSF | 2.0878 | 2.0419 | 0.5626 | 0.5308 |
MTF-GLP | 1.8830 | 2.1133 | 0.6056 | 0.5330 |
BDSD | 2.1964 | 1.8314 | 0.6620 | 0.6013 |
PCNN | 1.3925 | 1.2026 | 0.8081 | 0.7897 |
PANNET | 1.2200 | 1.0498 | 0.8406 | 0.8342 |
PSGAN | 0.9567 | 0.8445 | 0.9098 | 0.9167 |
Self_comparison | 0.9126 | 0.7946 | 0.9147 | 0.9231 |
Proposed | 0.8671 | 0.7604 | 0.9294 | 0.9348 |
Reference | 0 | 0 | 1 | 1 |
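The SAM and ERGAS columns above follow their standard definitions and can be reproduced with a few lines of NumPy. This is a minimal sketch, not the authors' evaluation code; a PAN/MS resolution ratio of 4 is assumed, as is common for Gaofen-2 and WorldView-2.

```python
import numpy as np

def sam_degrees(ref, fused, eps=1e-12):
    """Mean spectral angle (in degrees) between reference and fused spectra.
    ref, fused: float arrays of shape (H, W, B)."""
    dot = np.sum(ref * fused, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.degrees(np.mean(np.arccos(cos)))

def ergas(ref, fused, ratio=4):
    """ERGAS (relative dimensionless global error in synthesis).
    ratio is the PAN/MS spatial resolution ratio."""
    bands = ref.shape[-1]
    acc = 0.0
    for b in range(bands):
        rmse = np.sqrt(np.mean((ref[..., b] - fused[..., b]) ** 2))
        acc += (rmse / np.mean(ref[..., b])) ** 2
    return 100.0 / ratio * np.sqrt(acc / bands)
```

Both metrics reach their ideal value of 0 when the fused image equals the reference, matching the Reference row in the tables; note that SAM is invariant to per-pixel spectral scaling, which is why spectral distortion must also be checked with ERGAS.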
Method | SAM | ERGAS | SCC | |
---|---|---|---|---|
ATWT | 6.5497 | 3.0596 | 0.7602 | 0.7361 |
DGSF | 6.9088 | 4.0796 | 0.6969 | 0.7237 |
MTF-GLP | 6.5415 | 4.4157 | 0.7514 | 0.7360 |
BDSD | 8.3306 | 3.4183 | 0.7365 | 0.7062 |
PCNN | 5.4269 | 3.4702 | 0.8424 | 0.8929 |
PANNET | 4.6383 | 2.9386 | 0.8538 | 0.9141 |
PSGAN | 3.7764 | 2.4885 | 0.8780 | 0.9495 |
Self_comparison | 3.6108 | 2.3521 | 0.8810 | 0.9523 |
Proposed | 3.5641 | 2.3519 | 0.8868 | 0.9578 |
Reference | 0 | 0 | 1 | 1 |
Dataset | Backbone | Second-Level | Multi-Stream | Name | SAM | ERGAS | SCC | |
---|---|---|---|---|---|---|---|---
Gaofen-2 | ✔ | ✔ | | One_subnet | 0.8813 | 0.7876 | 0.9189 | 0.9312
| ✔ | | ✔ | Only_spatial1 | 0.9047 | 0.7802 | 0.9203 | 0.9306
| ✔ | ✔ | ✔ | Proposed | 0.8671 | 0.7604 | 0.9294 | 0.9348
WorldView-2 | ✔ | ✔ | | One_subnet | 3.6845 | 2.3758 | 0.8832 | 0.9543
| ✔ | | ✔ | Only_spatial1 | 3.5872 | 2.3659 | 0.8841 | 0.9509
| ✔ | ✔ | ✔ | Proposed | 3.5641 | 2.3519 | 0.8868 | 0.9578
Reference | | | | | 0 | 0 | 1 | 1
Fusion Method | Dλ | Ds | QNR
---|---|---|---
ATWT | 0.0728 | 0.1212 | 0.8148 |
DGSF | 0.1029 | 0.1202 | 0.7893 |
MTF-GLP | 0.0949 | 0.1438 | 0.7750 |
BDSD | 0.0426 | 0.0637 | 0.8964 |
PCNN | 0.0575 | 0.0655 | 0.8808 |
PANNET | 0.0130 | 0.0312 | 0.9563 |
PSGAN | 0.0142 | 0.0410 | 0.9453 |
Self_comparison | 0.0134 | 0.0297 | 0.9573 |
Proposed | 0.0127 | 0.0239 | 0.9638 |
Reference | 0 | 0 | 1 |
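In the full-resolution tables, QNR combines the spectral distortion index Dλ and the spatial distortion index Ds (Alparone et al., cited above) as QNR = (1 − Dλ)^α (1 − Ds)^β, with α = β = 1 being the usual choice. A minimal sketch of the combination step, assuming the two distortion indices are already computed:

```python
def qnr(d_lambda, d_s, alpha=1.0, beta=1.0):
    """No-reference quality index: QNR = (1 - D_lambda)^alpha * (1 - D_s)^beta.
    Ideal value is 1 (both distortions zero)."""
    return (1.0 - d_lambda) ** alpha * (1.0 - d_s) ** beta
```

For example, plugging the proposed method's distortions from the table above (0.0127 and 0.0239) into this formula reproduces its reported QNR of about 0.964 up to rounding.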
Method | SAM | ERGAS | SCC | |
---|---|---|---|---|
PCNN | 2.1840 | 1.4834 | 0.6787 | 0.9313 |
PANNET | 1.6128 | 1.1908 | 0.6794 | 0.9426 |
PSGAN | 1.5504 | 1.0725 | 0.6662 | 0.9564 |
Proposed | 1.4739 | 1.0197 | 0.6819 | 0.9610 |
Reference | 0 | 0 | 1 | 1 |
Fusion Method | Dλ | Ds | QNR
---|---|---|---
PCNN | 0.1531 | 0.1333 | 0.7358 |
PANNET | 0.0693 | 0.0679 | 0.8694 |
PSGAN | 0.0582 | 0.0860 | 0.8620 |
Proposed | 0.0469 | 0.0399 | 0.9157 |
Reference | 0 | 0 | 1 |
Method | SAM | ERGAS | SCC | |
---|---|---|---|---|
PCNN | 4.9805 | 3.0891 | 0.8056 | 0.9106 |
PANNET | 4.3985 | 2.6672 | 0.8192 | 0.9178 |
PSGAN | 3.7217 | 2.2817 | 0.8497 | 0.9524 |
Proposed | 3.5140 | 2.1630 | 0.8582 | 0.9603 |
Reference | 0 | 0 | 1 | 1 |
Fusion Method | Dλ | Ds | QNR
---|---|---|---
PCNN | 0.0926 | 0.1294 | 0.7902 |
PANNET | 0.0355 | 0.0688 | 0.8982 |
PSGAN | 0.0349 | 0.0843 | 0.8838 |
Proposed | 0.0227 | 0.0457 | 0.9327 |
Reference | 0 | 0 | 1 |
Method | SAM | ERGAS | SCC | |
---|---|---|---|---|
PCNN | 4.8930 | 3.0488 | 0.8580 | 0.9136 |
PANNET | 4.2288 | 2.7276 | 0.8651 | 0.9281 |
PSGAN | 3.4164 | 2.2940 | 0.8910 | 0.9586 |
Proposed | 3.1903 | 2.1578 | 0.9012 | 0.9667 |
Reference | 0 | 0 | 1 | 1 |
Fusion Method | Dλ | Ds | QNR
---|---|---|---
PCNN | 0.0419 | 0.0628 | 0.8984 |
PANNET | 0.0205 | 0.0377 | 0.9428 |
PSGAN | 0.0248 | 0.0496 | 0.9272 |
Proposed | 0.0211 | 0.0270 | 0.9526 |
Reference | 0 | 0 | 1 |
Method | SAM | ERGAS | SCC | |
---|---|---|---|---|
PCNN | 4.2356 | 2.8598 | 0.8164 | 0.8937 |
PANNET | 3.6831 | 2.5245 | 0.8199 | 0.9083 |
PSGAN | 3.0803 | 2.2228 | 0.8453 | 0.9377 |
Proposed | 2.8836 | 2.0635 | 0.8577 | 0.9495 |
Reference | 0 | 0 | 1 | 1 |
Fusion Method | Dλ | Ds | QNR
---|---|---|---
PCNN | 0.1157 | 0.1483 | 0.7534 |
PANNET | 0.0482 | 0.0896 | 0.8666 |
PSGAN | 0.0536 | 0.1109 | 0.8415 |
Proposed | 0.0517 | 0.0477 | 0.9030 |
Reference | 0 | 0 | 1 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, L.; Li, W.; Huang, H.; Lei, D. A Pansharpening Generative Adversarial Network with Multilevel Structure Enhancement and a Multistream Fusion Architecture. Remote Sens. 2021, 13, 2423. https://doi.org/10.3390/rs13122423