A New End-to-End Multi-Dimensional CNN Framework for Land Cover/Land Use Change Detection in Multi-Source Remote Sensing Datasets
"> Figure 1
<p>Multispectral datasets: (<b>a</b>) and (<b>b</b>) show the false-color composite of the multispectral images acquired over Abudhabi City, in United Arab Emirates (UAE) on 20 January 2016, and 28 March 2018, respectively, and (<b>c</b>) is the corresponding binary ground truth data which illustrates the <span class="html-italic">change</span> and <span class="html-italic">no-change</span> areas. (<b>d</b>) and (<b>e</b>) are the false-color composites of the multispectral images acquired over the Saclay area on 15 March 2016, and 29 October 2017, respectively, and (<b>f</b>) is the corresponding ground truth data, which white and black colors indicate the <span class="html-italic">change</span> and <span class="html-italic">no-change</span> areas, respectively.</p> "> Figure 2
<p>Hyperspectral datasets: (<b>a</b>) and (<b>b</b>) are a false-color composite of the <span class="html-italic">Hyperspectral-River</span> images acquired over a river in Jiangsu province, China on 3 May 2013, and 31 December 2013, respectively, and (<b>c</b>) is the corresponding binary ground truth data, which white and black colors indicate the <span class="html-italic">change</span> and <span class="html-italic">no-change</span> areas, respectively. (<b>d</b>) and (<b>e</b>) are a false-color composite of the <span class="html-italic">Hyperspectral-Farmland</span> images acquired over farmland in the USA on 1 May 2004, and 8 May 2007, respectively, and (<b>f</b>) is the corresponding ground truth data in binary format, which white and black colors indicate the <span class="html-italic">change</span> and <span class="html-italic">no-change</span> areas, respectively.</p> "> Figure 3
<p>PolSAR dataset: (<b>a</b>) and (<b>b</b>) are the RGB composites generated from the Pauli decomposition (Red: |HH − VV|; Green: 2|HV|; Blue: |HH + VV|) for the <span class="html-italic">PolSAR-San Francisco1</span> datasets, and (<b>c</b>) shows the corresponding binary ground truth data, which white and black colors indicate the <span class="html-italic">change</span> and <span class="html-italic">no-change</span> areas, respectively. (<b>d</b>) and (<b>e</b>) are Pauli RGB composite of the PolSAR-San Francisco2 images, and (<b>f</b>) shows the corresponding ground truth data in binary format, which white and black colors indicate the <span class="html-italic">change</span> and <span class="html-italic">no-change</span> areas, respectively. Note that (<b>a</b>) and (<b>d</b>) were acquired on 18 September 2009, and (<b>b</b>) and (<b>e</b>) were captured on 11 May 2015.</p> "> Figure 4
<p>General flowchart of the proposed supervised binary change detection (CD) method.</p> "> Figure 5
<p>The proposed End-to-End convolutional neural network (CNN) architecture for CD of remote sensing datasets.</p> "> Figure 6
<p>Overview of the three state-of-the-art CD frameworks based on deep learning. (<b>a</b>) General End-to-end Two-dimensional CNN Framework (GETNET) [<a href="#B29-remotesensing-12-02010" class="html-bibr">29</a>], (<b>b</b>) Siamese-concatenate network [<a href="#B44-remotesensing-12-02010" class="html-bibr">44</a>], and (<b>c</b>) Siamese-differencing network [<a href="#B44-remotesensing-12-02010" class="html-bibr">44</a>].</p> "> Figure 7
<p>Comparison of (<b>a</b>) 1-D, (<b>b</b>) 2-D, and (<b>c</b>) 3-D convolution layers.</p> "> Figure 8
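To make the comparison in Figure 7 concrete, the following is a minimal sketch of how the three convolution types traverse a hyperspectral patch. PyTorch is an assumed framework (the paper does not state its implementation), and the tensor sizes are illustrative only.

```python
# Contrast 1-D, 2-D, and 3-D convolutions on hyperspectral-style inputs.
import torch
import torch.nn as nn

spectrum = torch.randn(1, 1, 154)           # (batch, channels, bands): one pixel's spectrum
patch    = torch.randn(1, 154, 11, 11)      # (batch, bands-as-channels, height, width)
cube     = torch.randn(1, 1, 154, 11, 11)   # (batch, channels, bands, height, width)

conv1d = nn.Conv1d(1, 8, kernel_size=3)          # slides along the spectral axis only
conv2d = nn.Conv2d(154, 8, kernel_size=3)        # slides spatially; bands are fixed channels
conv3d = nn.Conv3d(1, 8, kernel_size=(7, 3, 3))  # slides jointly over bands and space

print(conv1d(spectrum).shape)  # torch.Size([1, 8, 152])
print(conv2d(patch).shape)     # torch.Size([1, 8, 9, 9])
print(conv3d(cube).shape)      # torch.Size([1, 8, 148, 9, 9])
```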
Figure 8. The difference between (a) simple differencing and (b) the *l2* norm in discriminating *change* and *no-change* areas for the *Hyperspectral-Farmland* dataset.
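As a hedged illustration of the two operators compared in Figure 8, the NumPy sketch below contrasts band-wise simple differencing with the per-pixel l2 norm of the difference vector. The array dimensions match the *Hyperspectral-Farmland* dataset, but the random data are placeholders.

```python
# Simple differencing keeps one signed value per band; the l2 norm
# collapses all bands into a single change magnitude per pixel.
import numpy as np

t1 = np.random.rand(306, 241, 154)  # image at time 1 (rows, cols, bands)
t2 = np.random.rand(306, 241, 154)  # image at time 2

simple_diff  = t1 - t2                           # per-band signed differences
l2_magnitude = np.linalg.norm(t1 - t2, axis=-1)  # one change magnitude per pixel

print(simple_diff.shape)   # (306, 241, 154)
print(l2_magnitude.shape)  # (306, 241)
```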
Figure 9. Change detection maps produced by different CD methods for the *Multispectral-Abudhabi* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, (g) the proposed method, and (h) the ground truth. White and black indicate the *change* and *no-change* areas, respectively.

Figure 10. Change detection errors of different methods for the *Multispectral-Abudhabi* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method. Black, red, and blue indicate TP/TN, FN, and FP pixels, respectively.

Figure 11. Confusion matrices of different CD methods for the *Multispectral-Abudhabi* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method.

Figure 12. Change detection maps produced by different methods for the *Multispectral-Saclay* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, (g) the proposed method, and (h) the ground truth. White and black indicate the *change* and *no-change* areas, respectively.

Figure 13. Change detection errors of different methods for the *Multispectral-Saclay* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method. Black, red, and blue indicate TP/TN, FN, and FP pixels, respectively.

Figure 14. Confusion matrices of different CD methods for the *Multispectral-Saclay* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method.

Figure 15. Change detection maps obtained by different methods for the *Hyperspectral-River* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, (g) the proposed method, and (h) the ground truth. White and black indicate the *change* and *no-change* areas, respectively.

Figure 16. Change detection errors of different methods for the *Hyperspectral-River* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method. Black, red, and blue indicate TP/TN, FN, and FP pixels, respectively.

Figure 17. Confusion matrices of different CD methods for the *Hyperspectral-River* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method.

Figure 18. Change detection maps of different methods for the *Hyperspectral-Farmland* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, (g) the proposed method, and (h) the ground truth. White and black indicate the *change* and *no-change* areas, respectively.

Figure 19. Change detection errors of different methods for the *Hyperspectral-Farmland* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method. Black, red, and blue indicate TP/TN, FN, and FP pixels, respectively.

Figure 20. Confusion matrices of different CD methods for the *Hyperspectral-Farmland* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method.

Figure 21. Change detection maps of different methods for the *PolSAR-San Francisco1* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, (g) the proposed method, and (h) the ground truth. White and black indicate the *change* and *no-change* areas, respectively.

Figure 22. Change detection errors of different methods for the *PolSAR-San Francisco1* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method. Black, red, and blue indicate TP/TN, FN, and FP pixels, respectively.

Figure 23. Confusion matrices of different CD methods for the *PolSAR-San Francisco1* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method.

Figure 24. Change detection maps of different methods for the *PolSAR-San Francisco2* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, (g) the proposed method, and (h) the ground truth. White and black indicate the *change* and *no-change* areas, respectively.

Figure 25. Change detection errors of different methods for the *PolSAR-San Francisco2* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method. Black, red, and blue indicate TP/TN, FN, and FP pixels, respectively.

Figure 26. Confusion matrices of different CD methods for the *PolSAR-San Francisco2* dataset: (a) CVA-SVM, (b) MAD-SVM, (c) PCA-SVM, (d) IR-MAD-SVM, (e) SFA-SVM, (f) 3D-CNN, and (g) the proposed method.
Abstract
1. Introduction
2. Datasets
2.1. Multispectral Datasets
2.2. Hyperspectral Datasets
2.3. PolSAR Datasets
3. Method
3.1. Pre-Processing
3.2. The Proposed End-to-End CD Network Based on CNN
3.2.1. The Convolution and Pooling Layers
3.2.2. Highlighting Deep Features
3.2.3. Discriminative Learning and Prediction
3.2.4. Updating Hyperparameters
3.2.5. Model Evaluation and Stopping Condition
3.3. Optimum Model
3.4. Accuracy Assessment
3.4.1. Comparison with Ground Truth Data
3.4.2. Comparison with Other CD Methods
4. Results
4.1. Setting Parameters
4.2. CD of Multispectral Datasets
4.2.1. Multispectral-Abudhabi Data
4.2.2. Multispectral-Saclay Data
4.3. CD of Hyperspectral Datasets
4.3.1. Hyperspectral-River Data
4.3.2. Hyperspectral-Farmland Data
4.4. CD of Polarimetric Datasets
4.4.1. PolSAR-San Francisco1 Data
4.4.2. PolSAR-San Francisco2 Data
5. Discussion
5.1. Experimental Setup
5.2. Complexity and Diversity of Classes
5.3. Training Data and Parameter Setting
5.4. Spatial and Spectral Features
5.5. End-to-End Framework
6. Conclusions and Future Work
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
1. Peduzzi, P. The Disaster Risk, Global Change, and Sustainability Nexus. Sustainability 2019, 11, 957.
2. Mahdavi, S.; Salehi, B.; Huang, W.; Amani, M.; Brisco, B. A PolSAR Change Detection Index Based on Neighborhood Information for Flood Mapping. Remote Sens. 2019, 11, 1854.
3. Hasanlou, M.; Seydi, S.T.; Shah-Hosseini, R. A Sub-Pixel Multiple Change Detection Approach for Hyperspectral Imagery. Can. J. Remote Sens. 2018, 44, 601–615.
4. Wang, M.; Tan, K.; Jia, X.; Wang, X.; Chen, Y. A Deep Siamese Network with Hybrid Convolutional Feature Extraction Module for Change Detection Based on Multi-sensor Remote Sensing Images. Remote Sens. 2020, 12, 205.
5. Zhang, Y.; Kerle, N. Satellite remote sensing for near-real time data collection. In Geospatial Information Technology for Emergency Response; CRC Press: Boca Raton, FL, USA, 2008; pp. 91–118.
6. Liu, S.; Zheng, Y.; Dalponte, M.; Tong, X. A novel fire index-based burned area change detection approach using Landsat-8 OLI data. Eur. J. Remote Sens. 2020, 53, 104–112.
7. Demir, B.; Bovolo, F.; Bruzzone, L. Updating land-cover maps by classification of image time series: A novel change-detection-driven transfer learning approach. IEEE Trans. Geosci. Remote Sens. 2012, 51, 300–312.
8. Leichtle, T. Change Detection for Application in Urban Geography based on Very High Resolution Remote Sensing. Ph.D. Thesis, Humboldt-Universität zu Berlin, Berlin, Germany, 2020.
9. Saha, S.; Bovolo, F.; Bruzzone, L. Unsupervised deep change vector analysis for multiple-change detection in VHR images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3677–3693.
10. López-Fandiño, J.; Heras, D.B.; Argüello, F.; Dalla Mura, M. GPU framework for change detection in multitemporal hyperspectral images. Int. J. Parallel Program. 2019, 47, 272–292.
11. Parikh, H.; Patel, S.; Patel, V. Classification of SAR and PolSAR images using deep learning: A review. Int. J. Image Data Fusion 2020, 11, 1–32.
12. Carranza-García, M.; García-Gutiérrez, J.; Riquelme, J.C. A framework for evaluating land use and land cover classification using convolutional neural networks. Remote Sens. 2019, 11, 274.
13. Liu, F.; Jiao, L.; Tang, X.; Yang, S.; Ma, W.; Hou, B. Local restricted convolutional neural network for change detection in polarimetric SAR images. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 818–833.
14. Kwan, C. Methods and Challenges Using Multispectral and Hyperspectral Images for Practical Change Detection Applications. Information 2019, 10, 353.
15. Wu, C.; Du, B.; Zhang, L. Slow feature analysis for change detection in multispectral imagery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 2858–2874.
16. Wu, S.; Bai, Y.; Chen, H. Change detection methods based on low-rank sparse representation for multi-temporal remote sensing imagery. Clust. Comput. 2019, 22, 9951–9966.
17. Chen, Z.; Leng, X.; Lei, L. Multiple features fusion change detection method based on Two-Level Clustering. In Proceedings of the 2019 International Conference on Robotics, Intelligent Control and Artificial Intelligence, Shanghai, China, 20–22 September 2019; pp. 159–165.
18. Zhang, W.; Lu, X. The spectral-spatial joint learning for change detection in multispectral imagery. Remote Sens. 2019, 11, 240.
19. Papadomanolaki, M.; Verma, S.; Vakalopoulou, M.; Gupta, S.; Karantzalos, K. Detecting urban changes with recurrent neural networks from multitemporal Sentinel-2 data. In Proceedings of the IGARSS 2019–2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 214–217.
20. Lv, Z.Y.; Liu, T.F.; Zhang, P.; Benediktsson, J.A.; Lei, T.; Zhang, X. Novel adaptive histogram trend similarity approach for land cover change detection by using bitemporal very-high-resolution remote sensing images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9554–9574.
21. Du, P.; Wang, X.; Chen, D.; Liu, S.; Lin, C.; Meng, Y. An improved change detection approach using tri-temporal logic-verified change vector analysis. ISPRS J. Photogramm. Remote Sens. 2020, 161, 278–293.
22. Liu, S.; Marinelli, D.; Bruzzone, L.; Bovolo, F. A review of change detection in multitemporal hyperspectral images: Current techniques, applications, and challenges. IEEE Geosci. Remote Sens. Mag. 2019, 7, 140–158.
23. Nielsen, A.A.; Conradsen, K.; Simpson, J.J. Multivariate alteration detection (MAD) and MAF postprocessing in multispectral, bitemporal image data: New approaches to change detection studies. Remote Sens. Environ. 1998, 64, 1–19.
24. Wu, C.; Du, B.; Zhang, L. A subspace-based change detection method for hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 815–830.
25. Yuan, Y.; Lv, H.; Lu, X. Semi-supervised change detection method for multi-temporal hyperspectral images. Neurocomputing 2015, 148, 363–375.
26. Liu, S.; Bruzzone, L.; Bovolo, F.; Du, P. Unsupervised multitemporal spectral unmixing for detecting multiple changes in hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2733–2748.
27. Wang, B.; Choi, S.-K.; Han, Y.-K.; Lee, S.-K.; Choi, J.-W. Application of IR-MAD using synthetically fused images for change detection in hyperspectral data. Remote Sens. Lett. 2015, 6, 578–586.
28. Song, A.; Choi, J.; Han, Y.; Kim, Y. Change detection in hyperspectral images using recurrent 3D fully convolutional networks. Remote Sens. 2018, 10, 1827.
29. Wang, Q.; Yuan, Z.; Du, Q.; Li, X. GETNET: A general end-to-end 2-D CNN framework for hyperspectral image change detection. IEEE Trans. Geosci. Remote Sens. 2018, 57, 3–13.
30. Marinelli, D.; Bovolo, F.; Bruzzone, L. A novel change detection method for multitemporal hyperspectral images based on binary hyperspectral change vectors. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4913–4928.
31. Li, X.; Yuan, Z.; Wang, Q. Unsupervised deep noise modeling for hyperspectral image change detection. Remote Sens. 2019, 11, 258.
32. Seydi, S.T.; Hasanlou, M. Hyperspectral change detection based on 3D convolution deep learning. In Proceedings of the International Society for Photogrammetry and Remote Sensing (ISPRS) Congress, Nice, France, 14–20 June 2020.
33. Huang, F.; Yu, Y.; Feng, T. Hyperspectral remote sensing image change detection based on tensor and deep learning. J. Vis. Commun. Image Represent. 2019, 58, 233–244.
34. Slagter, B.; Tsendbazar, N.-E.; Vollrath, A.; Reiche, J. Mapping wetland characteristics using temporally dense Sentinel-1 and Sentinel-2 data: A case study in the St. Lucia wetlands, South Africa. Int. J. Appl. Earth Obs. Geoinf. 2020, 86, 102009.
35. Qi, Z.; Yeh, A.G.-O.; Li, X.; Zhang, X. A three-component method for timely detection of land cover changes using polarimetric SAR images. ISPRS J. Photogramm. Remote Sens. 2015, 107, 3–21.
36. Ghanbari, M.; Akbari, V. Unsupervised Change Detection in Polarimetric SAR Data with the Hotelling-Lawley Trace Statistic and Minimum-Error Thresholding. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4551–4562.
37. Cui, B.; Zhang, Y.; Yan, L.; Wei, J.; Wu, H. An Unsupervised SAR Change Detection Method Based on Stochastic Subspace Ensemble Learning. Remote Sens. 2019, 11, 1314.
38. Najafi, A.; Hasanlou, M.; Akbari, V. Change detection using distance-based algorithms between synthetic aperture radar polarimetric decompositions. Int. J. Remote Sens. 2019, 40, 6084–6097.
39. Zhao, J.; Chang, Y.; Yang, J.; Niu, Y.; Lu, Z.; Li, P. A Novel Change Detection Method Based on Statistical Distribution Characteristics Using Multi-Temporal PolSAR Data. Sensors 2020, 20, 1508.
40. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D-2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2019, 17, 277–281.
41. Daudt, R.C.; Le Saux, B.; Boulch, A.; Gousseau, Y. Urban change detection for multispectral earth observation using convolutional neural networks. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 2115–2118.
42. Seydi, S.T.; Shahhoseini, R. Transformation Based Algorithms for Change Detection in Full Polarimetric Remote Sensing Images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 963–967.
43. Hu, W.-S.; Li, H.-C.; Pan, L.; Li, W.; Tao, R.; Du, Q. Feature extraction and classification based on spatial-spectral ConvLSTM neural network for hyperspectral images. arXiv 2019, arXiv:1905.03577.
44. Daudt, R.C.; Le Saux, B.; Boulch, A. Fully convolutional siamese networks for change detection. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 4063–4067.
45. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629.
46. Ji, S.; Zhang, C.; Xu, A.; Shi, Y.; Duan, Y. 3D convolutional neural networks for crop classification with multi-temporal remote sensing images. Remote Sens. 2018, 10, 75.
47. Feng, F.; Wang, S.; Wang, C.; Zhang, J. Learning Deep Hierarchical Spatial–Spectral Features for Hyperspectral Image Classification Based on Residual 3D-2D CNN. Sensors 2019, 19, 5276.
48. Du, J.; Wang, L.; Liu, Y.; Zhou, Z.; He, Z.; Jia, Y. Brain MRI Super-Resolution Using 3D Dilated Convolutional Encoder–Decoder Network. IEEE Access 2020, 8, 18938–18950.
49. Chen, C.; Liu, X.; Ding, M.; Zheng, J.; Li, J. 3D dilated multi-fiber network for real-time brain tumor segmentation in MRI. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 184–192.
50. Hu, Z.; Hu, Y.; Liu, J.; Wu, B.; Han, D.; Kurfess, T. 3D separable convolutional neural network for dynamic hand gesture recognition. Neurocomputing 2018, 318, 151–161.
51. Kiranyaz, S.; Avci, O.; Abdeljaber, O.; Ince, T.; Gabbouj, M.; Inman, D.J. 1D convolutional neural networks and applications: A survey. arXiv 2019, arXiv:1905.03554.
52. Chen, X.; Kopsaftopoulos, F.; Wu, Q.; Ren, H.; Chang, F.-K. A Self-Adaptive 1D Convolutional Neural Network for Flight-State Identification. Sensors 2019, 19, 275.
53. Li, J.; Cui, R.; Li, B.; Song, R.; Li, Y.; Du, Q. Hyperspectral Image Super-Resolution with 1D–2D Attentional Convolutional Neural Network. Remote Sens. 2019, 11, 2859.
54. Eckle, K.; Schmidt-Hieber, J. A comparison of deep networks with ReLU activation function and linear spline-type methods. Neural Netw. 2019, 110, 232–242.
55. Agarap, A.F. Deep learning using rectified linear units (ReLU). arXiv 2018, arXiv:1803.08375.
56. Chetouani, A.; Treuillet, S.; Exbrayat, M.; Jesset, S. Classification of engraved pottery sherds mixing deep-learning features by compact bilinear pooling. Pattern Recognit. Lett. 2020, 131, 1–7.
57. Christlein, V.; Spranger, L.; Seuret, M.; Nicolaou, A.; Král, P.; Maier, A. Deep Generalized Max Pooling. arXiv 2019, arXiv:1908.05040.
58. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS J. Photogramm. Remote Sens. 2013, 80, 91–106.
59. Alexandari, A.M.; Shrikumar, A.; Kundaje, A. Separable Fully Connected Layers Improve Deep Learning Models for Genomics. bioRxiv 2017, 146431.
60. Kanai, S.; Fujiwara, Y.; Yamanaka, Y.; Adachi, S. Sigsoftmax: Reanalysis of the softmax bottleneck. In Proceedings of the Advances in Neural Information Processing Systems, Montréal, QC, Canada, 3–8 December 2018; pp. 286–296.
61. Oland, A.; Bansal, A.; Dannenberg, R.B.; Raj, B. Be careful what you backpropagate: A case for linear output activations & gradient boosting. arXiv 2017, arXiv:1707.04199.
62. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
63. Li, Z.; Gong, B.; Yang, T. Improved dropout for shallow and deep learning. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 9 December 2016; pp. 2523–2531.
64. Qahtan, A.A.; Alharbi, B.; Wang, S.; Zhang, X. A PCA-based change detection framework for multidimensional data streams. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015; pp. 935–944.
65. Deng, J.; Wang, K.; Deng, Y.; Qi, G. PCA-based land-use change detection and analysis using multitemporal and multisensor satellite data. Int. J. Remote Sens. 2008, 29, 4823–4838.
66. Pirrone, D.; Bovolo, F.; Bruzzone, L. A Novel Framework Based on Polarimetric Change Vectors for Unsupervised Multiclass Change Detection in Dual-Pol Intensity SAR Images. IEEE Trans. Geosci. Remote Sens. 2020.
67. Hasanlou, M.; Seydi, S.T. Automatic change detection in remotely sensed hyperspectral imagery (Case study: Wetlands and waterbodies). Earth Obs. Geomat. Eng. 2018, 2, 9–25.
68. Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M. A comparison study of different kernel functions for SVM-based classification of multi-temporal polarimetry SAR data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 281.
69. Lameski, P.; Zdravevski, E.; Mingov, R.; Kulakov, A. SVM parameter tuning with grid search and its impact on reduction of model over-fitting. In Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing; Springer: Cham, Switzerland, 2015; pp. 464–474.
70. Kotsiantis, S.; Kanellopoulos, D.; Pintelas, P. Handling imbalanced datasets: A review. GESTS Int. Trans. Comput. Sci. Eng. 2006, 30, 25–36.
71. Lin, Z.; Hao, Z.; Yang, X.; Liu, X. Several SVM ensemble methods integrated with under-sampling for imbalanced data learning. In Proceedings of the International Conference on Advanced Data Mining and Applications, Beijing, China, 17–19 August 2009; pp. 536–544.
72. Longadge, R.; Dongre, S. Class imbalance problem in data mining review. arXiv 2013, arXiv:1305.1707.
73. Ramyachitra, D.; Manikandan, P. Imbalanced Dataset Classification and Solutions: A Review. 2014. Available online: https://www.semanticscholar.org/paper/IMBALANCED-DATASET-CLASSIFICATION-AND-SOLUTIONS-%3A-A-Ramyachitra-Manikandan/3e8ea23ec779f79c16f8f5402c5be2ef403fe8d3?citationIntent=background#citing-papers (accessed on 19 June 2020).
74. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; pp. 249–256.
75. Huang, W.; Song, G.; Li, M.; Hu, W.; Xie, K. Adaptive Weight Optimization for Classification of Imbalanced Data. In Proceedings of the International Conference on Intelligent Science and Big Data Engineering, Beijing, China, 31 July–2 August 2013; pp. 546–553.
76. Shah-Hosseini, R.; Homayouni, S.; Safari, A. A hybrid kernel-based change detection method for remotely sensed data in a similarity space. Remote Sens. 2015, 7, 12829–12858.
77. Song, A.; Kim, Y. Transfer Change Rules from Recurrent Fully Convolutional Networks for Hyperspectral Unmanned Aerial Vehicle Images without Ground Truth Data. Remote Sens. 2020, 12, 1099.
78. Zhao, W.; Du, S. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554.
79. Zhang, L.; Zhang, Q.; Du, B.; Huang, X.; Tang, Y.Y.; Tao, D. Simultaneous spectral-spatial feature selection and extraction for hyperspectral images. IEEE Trans. Cybern. 2016, 48, 16–28.
80. Li, H.; Xiang, S.; Zhong, Z.; Ding, K.; Pan, C. Multicluster spatial–spectral unsupervised feature selection for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1660–1664.
81. Solberg, A.S.; Jain, A.K. Texture fusion and feature selection applied to SAR imagery. IEEE Trans. Geosci. Remote Sens. 1997, 35, 475–479.
Specifications of the multispectral datasets.

Data | Resolution (m) | #Bands | Size (pixels) | Wavelength (nm)
---|---|---|---|---
Multispectral-Abudhabi | 10 | 13 | 401 × 401 | 356–1058
Multispectral-Saclay | 10 | 13 | 260 × 270 | 356–1058
Specifications of the hyperspectral datasets.

Data | Resolution (m) | #Bands | Size (pixels) | Wavelength (nm)
---|---|---|---|---
Hyperspectral-River | 30 | 154 | 436 × 241 | 356–1058
Hyperspectral-Farmland | 30 | 154 | 306 × 241 | 356–1058
Specifications of the PolSAR datasets.

Dataset | Range Resolution (m) | Azimuth Resolution (m) | Incidence Angles (°) | Size (pixels) | Wavelength (cm)
---|---|---|---|---|---
PolSAR-San Francisco1 | 1.66 | 1.00 | 25–65 | 200 × 200 | 23.84
PolSAR-San Francisco2 | 1.66 | 1.00 | 25–65 | 100 × 100 | 23.84
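For reference, the Pauli RGB composites in Figure 3 can be formed directly from the scattering-matrix channels using the mapping given in that caption (Red: |HH − VV|; Green: 2|HV|; Blue: |HH + VV|). The sketch below is one possible NumPy implementation; the global normalization step is an assumption added for display purposes.

```python
# Build a Pauli RGB display image from complex PolSAR channels.
import numpy as np

def pauli_rgb(hh, hv, vv):
    """Stack the Pauli components into an RGB cube scaled to [0, 1]."""
    r = np.abs(hh - vv)    # double-bounce scattering
    g = 2.0 * np.abs(hv)   # volume scattering
    b = np.abs(hh + vv)    # surface (odd-bounce) scattering
    rgb = np.stack([r, g, b], axis=-1)
    return rgb / rgb.max()  # simple global normalization (assumed)

# Hypothetical 200 x 200 complex channels, matching PolSAR-San Francisco1.
hh = np.random.randn(200, 200) + 1j * np.random.randn(200, 200)
hv = np.random.randn(200, 200) + 1j * np.random.randn(200, 200)
vv = np.random.randn(200, 200) + 1j * np.random.randn(200, 200)
print(pauli_rgb(hh, hv, vv).shape)  # (200, 200, 3)
```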
Layout of the binary confusion matrix.

Confusion Matrix | Predicted Change | Predicted No-Change
---|---|---
Actual Change | TP | FN
Actual No-Change | FP | TN
Accuracy indices used for evaluation. TP, TN, FP, and FN are taken from the confusion matrix, and N = TP + TN + FP + FN. (The formulas were lost in extraction; the standard definitions of these indices are restored here.)

Accuracy Index | Formula
---|---
OA | (TP + TN) / N
Precision | TP / (TP + FP)
Sensitivity | TP / (TP + FN)
Specificity | TN / (TN + FP)
BA | (Sensitivity + Specificity) / 2
F1-Score | 2 × (Precision × Sensitivity) / (Precision + Sensitivity)
MD | FN / (TP + FN)
FA | FP / (FP + TN)
KC | (OA − Pe) / (1 − Pe), with Pe = [(TP + FP)(TP + FN) + (FN + TN)(FP + TN)] / N²
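Assuming the standard definitions above, the function below computes all of the indices from the four confusion-matrix counts. It is a minimal sketch, not the authors' evaluation code.

```python
# Compute the CD accuracy indices from a binary confusion matrix.
def cd_metrics(tp, fn, fp, tn):
    n = tp + fn + fp + tn
    oa = (tp + tn) / n                      # overall accuracy
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)            # a.k.a. recall / detection rate
    specificity = tn / (tn + fp)
    ba = (sensitivity + specificity) / 2    # balanced accuracy
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    md = fn / (tp + fn)                     # missed detection rate
    fa = fp / (fp + tn)                     # false alarm rate
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kc = (oa - pe) / (1 - pe)               # kappa coefficient
    return dict(OA=oa, Precision=precision, Sensitivity=sensitivity,
                Specificity=specificity, BA=ba, F1=f1, MD=md, FA=fa, KC=kc)

print(cd_metrics(tp=540, fn=19, fp=14, tn=1486))  # illustrative counts only
```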
Distribution of the training, validation, and test samples for each dataset.

Dataset | Class | Total Number of Pixels | Number of Samples | Percentage (%) | Training | Validation | Testing
---|---|---|---|---|---|---|---
Multispectral-Abudhabi | Change | 11,519 | 2700 | 23.43 | 1755 | 405 | 540
Multispectral-Abudhabi | No-Change | 148,481 | 7500 | 5.05 | 4875 | 1125 | 1500
Multispectral-Saclay | Change | 1610 | 500 | 31.06 | 325 | 75 | 100
Multispectral-Saclay | No-Change | 68,590 | 4900 | 7.14 | 3185 | 735 | 980
Hyperspectral-River | Change | 9698 | 2200 | 22.68 | 1430 | 330 | 440
Hyperspectral-River | No-Change | 101,885 | 5000 | 5.21 | 3250 | 750 | 1000
Hyperspectral-Farmland | Change | 14,288 | 2000 | 14.00 | 1300 | 300 | 400
Hyperspectral-Farmland | No-Change | 59,458 | 2500 | 5.04 | 1625 | 375 | 500
PolSAR-San Francisco1 | Change | 3434 | 810 | 23.58 | 527 | 121 | 162
PolSAR-San Francisco1 | No-Change | 36,566 | 2400 | 6.56 | 1560 | 360 | 480
PolSAR-San Francisco2 | Change | 1259 | 400 | 31.77 | 260 | 60 | 80
PolSAR-San Francisco2 | No-Change | 8741 | 700 | 8.00 | 455 | 105 | 140
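The training/validation/testing columns follow a 65/15/20 split of the sampled pixels per class (e.g., the 2700 change samples of *Multispectral-Abudhabi* yield 1755/405/540). A minimal sketch of such a split is given below; the shuffling strategy and seed are assumptions.

```python
# Split sample indices 65/15/20 into train, validation, and test sets.
import numpy as np

def split_samples(indices, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(indices)           # shuffle before splitting (assumed)
    n = len(idx)
    n_train, n_val = int(0.65 * n), int(0.15 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_samples(np.arange(2700))
print(len(train), len(val), len(test))  # 1755 405 540
```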
Hyperparameters of the proposed network: candidate values explored and the values selected.

Parameter | Candidate Values | Selected Value
---|---|---
Initial learning rate | {10⁻³, 10⁻⁴} | 10⁻⁴
Epsilon value | {10⁻⁵, 10⁻⁹} | 10⁻⁹
Number of epochs | {550, 750, 950} | 750
Mini-batch size | {100, 150} | 100
Dropout rate | {0.1, 0.2} | 0.1
Patch size | {11, 13} | 11
Weight initializer | {Random, Glorot} | Glorot
Number of neurons in the first fully connected layer | {150, 250} | 150
Number of neurons in the second fully connected layer | {15, 25} | 15
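As a hedged illustration, the snippet below wires the selected values into a Keras classification head (the framework, the ReLU activations, and the two-class softmax output are assumptions; the convolutional feature extractor that precedes this head is omitted). Training would then use the table's remaining settings, e.g. `model.fit(..., epochs=750, batch_size=100)`.

```python
# Dense head reflecting the selected hyperparameters: Adam with lr=1e-4 and
# epsilon=1e-9, dropout 0.1, Glorot initialization, and 150/15-neuron
# fully connected layers.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_head(feature_dim):
    model = models.Sequential([
        layers.Input(shape=(feature_dim,)),
        layers.Dense(150, activation="relu", kernel_initializer="glorot_uniform"),
        layers.Dropout(0.1),
        layers.Dense(15, activation="relu", kernel_initializer="glorot_uniform"),
        layers.Dense(2, activation="softmax"),  # change / no-change
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4, epsilon=1e-9),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

print(build_head(feature_dim=128).summary())  # feature_dim is hypothetical
```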
Accuracy assessment of different CD methods for the *Multispectral-Abudhabi* dataset.

Method | CVA | MAD | PCA | IR-MAD | SFA | 3D-CNN | Proposed Method
---|---|---|---|---|---|---|---|
OA (%) | 80.54 | 91.80 | 88.08 | 91.79 | 92.54 | 91.73 | 98.89 |
Sensitivity (%) | 61.94 | 9.10 | 45.85 | 10.30 | 16.82 | 0.59 | 96.65 |
MD (%) | 38.06 | 90.90 | 54.15 | 89.70 | 83.18 | 99.41 | 3.35 |
FA (%) | 18.01 | 1.78 | 8.64 | 1.88 | 1.59 | 1.20 | 0.93 |
F1-Score (%) | 31.43 | 13.78 | 35.65 | 15.31 | 24.51 | 1.02 | 92.63 |
BA (%) | 71.96 | 53.66 | 68.61 | 54.21 | 57.62 | 49.70 | 97.86 |
Precision (%) | 21.06 | 28.39 | 29.17 | 29.81 | 45.14 | 3.68 | 88.93 |
Specificity (%) | 81.99 | 98.22 | 91.36 | 98.12 | 98.41 | 98.80 | 99.07 |
KC | 0.231 | 0.106 | 0.294 | 0.120 | 0.214 | 0.00 | 0.920 |
Accuracy assessment of different CD methods for the *Multispectral-Saclay* dataset.

Method | CVA | MAD | PCA | IR-MAD | SFA | 3D-CNN | Proposed Method
---|---|---|---|---|---|---|---|
OA (%) | 94.09 | 91.05 | 92.55 | 91.1 | 92.48 | 98.15 | 99.18 |
Sensitivity (%) | 19.50 | 42.48 | 19.25 | 40.56 | 31.06 | 29.19 | 75.40 |
MD (%) | 80.50 | 57.52 | 80.75 | 59.44 | 68.94 | 70.81 | 24.60 |
FA (%) | 4.16 | 7.81 | 5.72 | 7.70 | 6.07 | 0.23 | 0.25 |
F1-Score (%) | 13.15 | 17.88 | 10.61 | 17.31 | 15.94 | 42.02 | 80.99 |
BA (%) | 57.67 | 67.34 | 56.77 | 66.43 | 62.49 | 64.48 | 87.58 |
Precision (%) | 9.92 | 11.32 | 7.32 | 11.00 | 10.72 | 74.96 | 87.46 |
Specificity (%) | 95.84 | 92.19 | 94.28 | 92.30 | 93.93 | 99.77 | 99.75 |
KC | 0.104 | 0.148 | 0.075 | 0.142 | 0.128 | 0.413 | 0.805 |
Accuracy assessment of different CD methods for the *Hyperspectral-River* dataset.

Method | CVA | MAD | PCA | IR-MAD | SFA | 3D-CNN | Proposed Method
---|---|---|---|---|---|---|---|
OA (%) | 96.41 | 93.51 | 96.75 | 87.13 | 92.94 | 95.02 | 97.50 |
Sensitivity (%) | 77.76 | 36.52 | 77.35 | 59.89 | 70.21 | 76.47 | 81.66 |
MD (%) | 22.23 | 63.47 | 22.64 | 40.10 | 29.78 | 23.52 | 18.33 |
FA (%) | 1.81 | 1.06 | 1.40 | 10.27 | 4.89 | 3.21 | 0.99 |
F1-Score (%) | 79.04 | 49.46 | 80.56 | 44.73 | 63.36 | 72.76 | 85.06 |
BA (%) | 87.98 | 67.73 | 87.97 | 74.81 | 82.65 | 86.63 | 90.34 |
Precision (%) | 80.37 | 76.63 | 84.05 | 35.69 | 57.72 | 69.39 | 88.74 |
Specificity (%) | 98.19 | 98.94 | 98.60 | 89.72 | 95.10 | 96.78 | 99.01 |
KC | 0.771 | 0.464 | 0.788 | 0.379 | 0.595 | 0.700 | 0.837 |
Accuracy assessment of different CD methods for the *Hyperspectral-Farmland* dataset.

Method | CVA | MAD | PCA | IR-MAD | SFA | 3D-CNN | Proposed Method
---|---|---|---|---|---|---|---|
OA (%) | 95.93 | 85.81 | 96.06 | 91.23 | 94.87 | 94.91 | 97.15 |
Sensitivity (%) | 85.55 | 47.47 | 82.13 | 86.91 | 76.82 | 82.84 | 90.78 |
MD (%) | 14.44 | 52.52 | 17.86 | 13.08 | 23.17 | 17.15 | 9.21 |
FA (%) | 1.59 | 5.04 | 0.62 | 7.73 | 0.82 | 2.21 | 1.33
F1-Score (%) | 89.01 | 56.31 | 88.92 | 79.25 | 85.22 | 86.23 | 92.46 |
BA (%) | 91.97 | 71.21 | 90.75 | 89.58 | 87.99 | 90.31 | 94.72 |
Precision (%) | 92.76 | 69.22 | 96.94 | 72.83 | 95.69 | 89.91 | 94.21 |
Specificity (%) | 98.40 | 94.96 | 99.38 | 92.26 | 99.17 | 97.78 | 98.67 |
KC | 0.865 | 0.482 | 0.865 | 0.738 | 0.821 | 0.831 | 0.907 |
Accuracy assessment of different CD methods for the *PolSAR-San Francisco1* dataset.

Method | CVA | MAD | PCA | IR-MAD | SFA | 3D-CNN | Proposed Method
---|---|---|---|---|---|---|---|
OA (%) | 91.74 | 92.17 | 91.36 | 91.37 | 95.64 | 95.62 | 98.31 |
Sensitivity (%) | 44.11 | 52.62 | 27.57 | 22.56 | 71.75 | 64.00 | 93.06 |
MD (%) | 55.88 | 47.37 | 72.42 | 77.43 | 28.24 | 35.99 | 6.93 |
FA (%) | 3.78 | 4.11 | 2.64 | 2.16 | 2.11 | 1.41 | 1.19 |
F1-Score (%) | 47.85 | 53.58 | 35.41 | 30.98 | 73.88 | 71.49 | 90.43 |
BA (%) | 70.16 | 74.25 | 62.46 | 60.19 | 84.82 | 81.29 | 95.93 |
Precision (%) | 52.27 | 54.57 | 49.47 | 49.42 | 76.14 | 80.95 | 87.94 |
Specificity (%) | 96.21 | 95.88 | 97.35 | 97.83 | 97.88 | 98.58 | 98.80 |
KC | 0.434 | 0.493 | 0.311 | 0.271 | 0.715 | 0.691 | 0.895 |
Accuracy assessment of different CD methods for the *PolSAR-San Francisco2* dataset.

Method | CVA | MAD | PCA | IR-MAD | SFA | 3D-CNN | Proposed Method
---|---|---|---|---|---|---|---|
OA (%) | 88.28 | 89.68 | 89.76 | 89.23 | 91.73 | 93.87 | 95.89 |
Sensitivity (%) | 9.85 | 22.16 | 26.93 | 17.95 | 40.98 | 57.27 | 79.03 |
MD (%) | 90.15 | 77.84 | 73.07 | 82.05 | 59.02 | 42.73 | 20.97 |
FA (%) | 0.42 | 0.59 | 1.19 | 0.50 | 0.96 | 0.86 | 1.68 |
F1-Score (%) | 17.46 | 35.09 | 39.84 | 29.56 | 55.51 | 70.17 | 82.88 |
BA (%) | 54.71 | 60.78 | 62.87 | 58.72 | 70.01 | 78.20 | 88.67 |
Precision (%) | 77.02 | 84.29 | 76.52 | 83.70 | 86.00 | 90.58 | 87.13 |
Specificity (%) | 99.57 | 99.40 | 98.81 | 99.50 | 99.04 | 99.14 | 98.32 |
KC | 0.150 | 0.315 | 0.356 | 0.263 | 0.515 | 0.669 | 0.805 |
Comparison of the proposed method with previously published results on the hyperspectral datasets.

Method | Hyperspectral-River [29] | Hyperspectral-Farmland [3] | Hyperspectral-River (Proposed) | Hyperspectral-Farmland (Proposed)
---|---|---|---|---|
OA (%) | 95.14 | 92.32 | 97.50 | 97.15 |
KC | 0.754 | 0.818 | 0.837 | 0.907 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Citation: Seydi, S.T.; Hasanlou, M.; Amani, M. A New End-to-End Multi-Dimensional CNN Framework for Land Cover/Land Use Change Detection in Multi-Source Remote Sensing Datasets. Remote Sens. 2020, 12, 2010. https://doi.org/10.3390/rs12122010