Synergistic Use of Multi-Temporal RADARSAT-2 and VENµS Data for Crop Classification Based on 1D Convolutional Neural Network
"> Figure 1
<p>Location of the study site, RGB image of the VENµS data acquired in July and ground truth data.</p> "> Figure 2
<p>Flowchart of the methodology. The black arrow denotes data processing; the blue indicates means training of architecture and hyperparameters; the red arrow represents training and cross validation of the optimized classifier; the green arrow indicates the final classification map generation and accuracy assessment using the trained classifiers and ground truth data.</p> "> Figure 3
<p>Proposed architecture of Conv 1D. The network input is a feature series of one pixel. Three convolutional layers are consecutively applied, then three dense layers, and the output layer, that provides the predicting class.</p> "> Figure 4
<p>Classification maps of the four classifiers (<b>a</b>) Conv1D, (<b>b</b>) MLP, (<b>c</b>) LSTM, (<b>d</b>) SVM, (<b>e</b>) XGBoost, (<b>f</b>) RF using the MNF transformation of RADARSAT-2 Pauli decomposition and VENµS data. Misclassifications mainly exist between Wheat and Forage, as well as Soybean and Other.</p> ">
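For intuition about the Figure 3 design: a Conv1D layer slides a small kernel along a single pixel's multi-temporal feature series and takes dot products at each position. The following minimal pure-Python sketch shows only that core operation; the series, kernel values, and ReLU pairing are illustrative, not the trained network's weights.

```python
def conv1d(series, kernel, bias=0.0):
    """Valid-mode 1D convolution (cross-correlation, as in deep learning
    frameworks): slide the kernel along the series, taking dot products."""
    k = len(kernel)
    return [
        sum(series[i + j] * kernel[j] for j in range(k)) + bias
        for i in range(len(series) - k + 1)
    ]

# A toy 6-step feature series for one pixel and a width-3 kernel.
series = [0.1, 0.4, 0.8, 0.9, 0.5, 0.2]
kernel = [1.0, 0.0, -1.0]  # a filter that responds to temporal change
features = [max(0.0, v) for v in conv1d(series, kernel)]  # ReLU activation
print(features)
```

Stacking several such layers, as in Figure 3, lets later kernels respond to patterns of change detected by earlier ones before the dense layers map them to a class.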
Abstract
1. Introduction
2. Materials
2.1. Study Site
2.2. Ground Truth Data
2.3. Remote Sensing Data
2.3.1. RADARSAT-2 Data
2.3.2. VENµS Data
2.4. Data Preparation
3. Methods
3.1. Neural Network Classifiers
3.2. Other Classifiers
3.3. Evaluation
4. Results
4.1. Overall Classification Accuracy and Training Time
4.2. Classification Accuracy of Individual Land Cover Class
5. Discussion
5.1. Comparisons between Conv1D and Other Classifiers
5.2. Influence of Input Features on the Performance of Conv1D
5.3. Extended Experiments on Datasets with Different Number of RADARSAT-2 and VENµS Data
5.4. Future Work
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
Land Cover | Field Polygons | Training Pixels | Validation Pixels | Testing Pixels |
---|---|---|---|---|
Soybean (Purple) | 80 | 60,000 | 20,000 | 20,000 |
Corn (Coral) | 104 | 66,000 | 22,000 | 22,000 |
Winter Wheat (Blue) | 20 | 11,400 | 3800 | 3800 |
Forage (Green) | 29 | 9000 | 3000 | 3000 |
Forest (Dark Green) | 13 | 7500 | 2500 | 2500 |
Built-up (Yellow) | 5 | 1440 | 480 | 480 |
Other (Cyan) | 10 | 1110 | 370 | 370 |
Date | Beam Mode | Incidence Angle (°) | Orbit |
---|---|---|---|
2018-07-01 | FQ10W | 28.4–31.6 | ASC |
2018-07-08 | FQ5W | 22.5–26.0 | ASC |
2018-07-25 | FQ10W | 28.4–31.6 | ASC |
2018-08-01 | FQ5W | 22.5–26.0 | ASC |
2018-08-18 | FQ10W | 28.4–31.6 | ASC |
2018-08-25 | FQ5W | 22.5–26.0 | ASC |
2018-09-01 | FQ1W | 17.5–21.2 | ASC |
2018-09-08 | FQ4W | 21.3–24.8 | DES |
2018-09-15 | FQ9W | 27.2–30.5 | DES |
2018-10-05 | FQ10W | 28.4–31.6 | ASC |
Classifier | Parameters | Candidate Values | Selected Values
---|---|---|---
XGBoost | Learning rate | 0.01, 0.015, 0.05, 0.1 | 0.1
 | Gamma | 0, 0.05, 0.1, 0.5 | 0
 | Max_depth | 3, 5, 7, 9 | 3
 | Min_child_weight | 1, 3, 5, 7 | 1
 | Colsample_bytree | 0.7, 0.8, 0.9, 1 | 1
 | Subsample | 0.7, 0.8, 0.9, 1 | 1
 | N_estimators | 50, 100, 150, 200 | 100
 | Reg_lambda | 0.01, 0.1, 1 | 1
 | Reg_alpha | 0, 0.1, 0.5, 1 | 0
RF | N_estimators | 50, 100, 150, 200 | 100
 | Max_depth | 5, 10, 15, 20, None | None
 | Min_samples_split | 2, 5, 10, 100 | 2
 | Min_samples_leaf | 1, 2, 5, 10 | 1
 | Max_features | ‘log2’, ‘sqrt’, None | ‘sqrt’
SVM | Kernel function | ‘linear’, ‘poly’, ‘rbf’ | ‘linear’
 | C | 0.1, 1, 10, 100 | 1
 | Gamma | ‘scale’, ‘auto’, 0.1, 1, 10 | ‘scale’
Classifier | Parameters | Candidate Values | Selected Values
---|---|---|---
Conv1D | Convolution layers | 2, 3, 6, 8, 10 | 3
 | Learning rate | 0.01, 0.001, 1.5 × 10−5, 1.5 × 10−4, 1.0 × 10−4 | 1.5 × 10−4
 | Epoch | 10, 20, 30 | 20
 | Batch size | 32, 320, 3200 | 3200
 | Dropout | 0, 0.2, 0.5, 0.8 | 0.5
MLP | Layers | 1, 2, 3, 4, 5, 6 | 5
 | Nodes each layer | 64, 128, 256, 512, 1024 | 512
 | Learning rate | 0.01, 0.001, 1.5 × 10−5, 1.5 × 10−4, 1.0 × 10−4 | 1.5 × 10−4
 | Epoch | 10, 20, 50, 100 | 20
 | Batch size | 32, 320, 3200 | 3200
 | Dropout | 0, 0.2, 0.5, 0.8 | 0.5
LSTM | Layers | 1, 2, 3, 4 | 3
 | Nodes each layer | 32, 64, 128, 256, 512 | 256
 | Epoch | 10, 20, 30 | 10
 | Batch size | 32, 320, 3200 | 320
 | Learning rate | 0.001, 1.0 × 10−4, 1.5 × 10−4, 1.5 × 10−5 | 1.5 × 10−4
 | Dropout | 0, 0.2, 0.5, 0.8 | 0.5
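The candidate/selected-value tables above imply an exhaustive search over each hyperparameter grid. The paper's exact tuning procedure is not restated here, so the sketch below is a generic grid search under that assumption; the `toy_score` function is a placeholder for cross-validated accuracy, not the authors' evaluation.

```python
from itertools import product

def grid_search(grid, score):
    """Evaluate every combination in a hyperparameter grid and return
    the best-scoring configuration together with its score."""
    names = list(grid)
    best = None
    for values in product(*(grid[n] for n in names)):
        cfg = dict(zip(names, values))
        s = score(cfg)
        if best is None or s > best[1]:
            best = (cfg, s)
    return best

# A toy grid mirroring two of the XGBoost rows above; the score below is
# a stand-in for cross-validated accuracy.
grid = {"learning_rate": [0.01, 0.015, 0.05, 0.1], "max_depth": [3, 5, 7, 9]}
toy_score = lambda cfg: cfg["learning_rate"] - 0.01 * cfg["max_depth"]
best_cfg, best_score = grid_search(grid, toy_score)
print(best_cfg)
```

With real classifiers, `score` would train on the training pixels and evaluate on the validation pixels for each configuration, which is why the candidate lists are kept short.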
Scenario | Input Data |
---|---|
Scenario 1 | Coherency matrix
Scenario 2 | Pauli decomposition
Scenario 3 | Linear polarization
Scenario 4 | Cloude-Pottier decomposition
Scenario 5 | Freeman-Durden decomposition
Scenario 6 | VENµS
Scenario 7 | Coherency matrix + VENµS
Scenario 8 | MNF (Coherency matrix + VENµS)
Scenario 9 | Pauli + VENµS
Scenario 10 | MNF (Pauli + VENµS)
OA (%)

Scenario | Conv1D | MLP | LSTM | XGBoost | RF | SVM
---|---|---|---|---|---|---
Scenario 1 | 91.85 ± 2.51 | 91.48 ± 2.38 | 85.75 ± 1.22 | 91.65 ± 2.21 | 91.55 ± 2.38 | 90.87 ± 2.12 |
Scenario 2 | 92.46 ± 2.54 | 92.47 ± 2.24 | 87.13 ± 2.02 | 92.95 ± 2.24 | 93.05 ± 2.36 | 91.50 ± 2.31 |
Scenario 3 | 90.40 ± 2.88 | 90.70 ± 2.89 | 83.05 ± 7.26 | 90.84 ± 2.93 | 91.17 ± 3.21 | 89.54 ± 2.82 |
Scenario 4 | 90.36 ± 2.42 | 86.97 ± 2.16 | 88.37 ± 2.55 | 89.01 ± 2.07 | 89.22 ± 1.89 | 87.50 ± 1.36 |
Scenario 5 | 88.68 ± 2.07 | 58.60 ± 1.97 | 90.03 ± 3.79 | 91.99 ± 2.55 | 91.59 ± 2.21 | 89.62 ± 3.21 |
Scenario 6 | 93.15 ± 2.06 | 91.42 ± 3.01 | 90.02 ± 0.65 | 92.34 ± 1.88 | 92.63 ± 2.07 | 93.23 ± 1.76 |
Scenario 7 | 94.71 ± 1.85 | 94.84 ± 1.39 | 94.11 ± 0.42 | 95.23 ± 1.81 | 94.43 ± 2.26 | 95.30 ± 0.92 |
Scenario 8 | 96.65 ± 1.03 | 95.67 ± 2.17 | 93.82 ± 0.77 | 94.82 ± 1.57 | 95.39 ± 1.71 | 95.90 ± 0.69 |
Scenario 9 | 95.31 ± 1.78 | 94.58 ± 1.38 | 92.91 ± 0.59 | 95.38 ± 1.72 | 95.09 ± 1.70 | 95.23 ± 1.14 |
Scenario 10 | 96.72 ± 0.77 | 96.14 ± 1.47 | 93.65 ± 2.47 | 95.45 ± 1.29 | 95.92 ± 1.12 | 96.06 ± 0.86 |
Kappa

Scenario | Conv1D | MLP | LSTM | XGBoost | RF | SVM
---|---|---|---|---|---|---
Scenario 1 | 0.87 ± 0.04 | 0.87 ± 0.04 | 0.79 ± 0.12 | 0.87 ± 0.03 | 0.87 ± 0.04 | 0.86 ± 0.03 |
Scenario 2 | 0.89 ± 0.04 | 0.89 ± 0.03 | 0.81 ± 0.03 | 0.89 ± 0.03 | 0.90 ± 0.04 | 0.87 ± 0.03 |
Scenario 3 | 0.86 ± 0.04 | 0.86 ± 0.04 | 0.74 ± 0.11 | 0.86 ± 0.04 | 0.87 ± 0.05 | 0.84 ± 0.04 |
Scenario 4 | 0.85 ± 0.04 | 0.80 ± 0.03 | 0.83 ± 0.04 | 0.83 ± 0.03 | 0.84 ± 0.03 | 0.81 ± 0.02 |
Scenario 5 | 0.83 ± 0.03 | 0.31 ± 0.03 | 0.85 ± 0.06 | 0.88 ± 0.04 | 0.87 ± 0.03 | 0.84 ± 0.05 |
Scenario 6 | 0.90 ± 0.03 | 0.87 ± 0.05 | 0.85 ± 0.01 | 0.88 ± 0.03 | 0.89 ± 0.03 | 0.90 ± 0.03 |
Scenario 7 | 0.92 ± 0.03 | 0.92 ± 0.02 | 0.91 ± 0.01 | 0.93 ± 0.03 | 0.92 ± 0.03 | 0.93 ± 0.01 |
Scenario 8 | 0.95 ± 0.02 | 0.93 ± 0.03 | 0.91 ± 0.01 | 0.92 ± 0.02 | 0.93 ± 0.03 | 0.94 ± 0.01 |
Scenario 9 | 0.93 ± 0.03 | 0.92 ± 0.02 | 0.89 ± 0.01 | 0.93 ± 0.03 | 0.93 ± 0.03 | 0.93 ± 0.02 |
Scenario 10 | 0.95 ± 0.01 | 0.94 ± 0.02 | 0.90 ± 0.04 | 0.93 ± 0.02 | 0.94 ± 0.02 | 0.94 ± 0.01 |
Training Time (s)

Scenario | Conv1D | MLP | LSTM | XGBoost | RF | SVM
---|---|---|---|---|---|---
Scenario 1 | 707.35 | 51.00 | 2160.15 | 3588.45 | 1231.10 | 3977.45 |
Scenario 2 | 269.70 | 50.50 | 876.10 | 1204.55 | 561.20 | 1499.00 |
Scenario 3 | 238.20 | 49.25 | 884.00 | 1204.10 | 555.35 | 2074.30 |
Scenario 4 | 192.60 | 61.45 | 877.25 | 1224.65 | 563.90 | 3628.15 |
Scenario 5 | 249.65 | 37.45 | 888.00 | 1129.25 | 467.65 | 1837.10 |
Scenario 6 | 281.65 | 57.55 | 787.10 | 530.30 | 127.25 | 1584.85 |
Scenario 7 | 657.20 | 50.80 | 2666.50 | 3764.60 | 969.20 | 1838.75 |
Scenario 8 | 348.55 | 47.50 | 2675.55 | 4352.60 | 1007.20 | 63,082.15 |
Scenario 9 | 345.80 | 45.35 | 1408.05 | 1638.05 | 526.15 | 1190.90 |
Scenario 10 | 175.50 | 40.25 | 1401.65 | 2075.60 | 689.50 | 21,200.10 |
Classifier | Input | Soybean | Corn | Wheat | Forage | Forest | Built-Up | Other
---|---|---|---|---|---|---|---|---
Conv1D | Scenario 1 | 93.50 ± 2.29 | 94.87 ± 1.74 | 86.17 ± 5.26 | 73.91 ± 9.71 | 86.83 ± 3.48 | 87.65 ± 13.48 | 47.93 ± 15.13
 | Scenario 2 | 93.99 ± 2.17 | 94.51 ± 2.15 | 89.00 ± 6.36 | 75.19 ± 10.89 | 93.04 ± 2.25 | 85.21 ± 15.11 | 63.36 ± 12.60
 | Scenario 3 | 92.07 ± 2.67 | 93.07 ± 2.76 | 85.31 ± 6.21 | 70.46 ± 11.24 | 87.76 ± 11.24 | 87.32 ± 1.16 | 53.90 ± 20.18
 | Scenario 4 | 90.51 ± 1.38 | 94.57 ± 1.88 | 80.38 ± 8.36 | 71.06 ± 9.27 | 63.83 ± 6.22 | 62.55 ± 30.37 | 0
 | Scenario 5 | 92.14 ± 2.18 | 93.03 ± 2.08 | 88.09 ± 6.26 | 69.40 ± 8.34 | 87.74 ± 3.29 | 79.32 ± 13.98 | 28.44 ± 10.37
 | Scenario 6 | 93.09 ± 1.84 | 95.87 ± 1.85 | 92.07 ± 9.63 | 77.72 ± 13.22 | 97.31 ± 1.22 | 98.91 ± 0.75 | 31.77 ± 18.93
 | Scenario 7 | 95.12 ± 2.12 | 96.25 ± 1.17 | 94.97 ± 3.22 | 83.28 ± 9.22 | 97.93 ± 1.38 | 91.48 ± 13.64 | 54.30 ± 17.22
 | Scenario 8 | 97.23 ± 0.66 | 97.60 ± 0.87 | 95.34 ± 5.00 | 87.89 ± 8.75 | 97.43 ± 2.72 | 98.49 ± 1.38 | 81.14 ± 9.60
 | Scenario 9 | 95.39 ± 1.97 | 96.60 ± 1.42 | 95.11 ± 5.08 | 86.92 ± 7.57 | 98.23 ± 1.24 | 93.54 ± 11.17 | 59.53 ± 16.17
 | Scenario 10 | 97.13 ± 0.72 | 97.51 ± 0.97 | 96.07 ± 4.38 | 88.56 ± 7.95 | 98.21 ± 2.08 | 96.10 ± 4.91 | 86.37 ± 9.14
MLP | Scenario 1 | 93.26 ± 1.93 | 94.90 ± 1.39 | 84.66 ± 9.05 | 73.53 ± 10.96 | 86.28 ± 2.99 | 85.43 ± 13.92 | 2.74 ± 4.73
 | Scenario 2 | 93.71 ± 1.87 | 94.99 ± 1.70 | 89.55 ± 6.40 | 77.03 ± 10.49 | 92.10 ± 3.03 | 82.92 ± 16.17 | 27.31 ± 24.16
 | Scenario 3 | 92.12 ± 2.50 | 93.51 ± 2.68 | 87.09 ± 5.96 | 72.58 ± 10.23 | 87.82 ± 1.29 | 85.47 ± 12.79 | 19.75 ± 25.85
 | Scenario 4 | 88.79 ± 1.82 | 94.17 ± 2.19 | 81.82 ± 6.75 | 68.07 ± 9.59 | 40.42 ± 7.47 | 45.00 ± 27.11 | 0
 | Scenario 5 | 62.65 ± 1.60 | 67.20 ± 3.59 | 14.83 ± 7.05 | 2.34 ± 1.29 | 1.40 ± 0.94 | 0 | 0
 | Scenario 6 | 91.68 ± 3.07 | 94.65 ± 2.15 | 88.63 ± 8.71 | 70.03 ± 11.88 | 95.18 ± 2.86 | 99.20 ± 0.8 | 0
 | Scenario 7 | 95.59 ± 1.80 | 96.37 ± 0.91 | 95.57 ± 3.43 | 86.39 ± 8.24 | 98.05 ± 1.24 | 91.11 ± 10.83 | 25.19 ± 24.91
 | Scenario 8 | 96.11 ± 1.91 | 97.18 ± 1.23 | 94.47 ± 6.37 | 86.00 ± 11.12 | 97.00 ± 3.10 | 94.71 ± 4.33 | 52.26 ± 23.31
 | Scenario 9 | 94.81 ± 1.66 | 96.42 ± 1.23 | 94.62 ± 3.67 | 85.08 ± 6.57 | 98.00 ± 0.93 | 92.12 ± 13.26 | 20.13 ± 13.66
 | Scenario 10 | 96.54 ± 1.49 | 97.35 ± 1.07 | 95.69 ± 4.39 | 87.35 ± 8.58 | 97.48 ± 2.05 | 95.73 ± 3.86 | 71.79 ± 10.71
LSTM | Scenario 1 | 87.64 ± 1.63 | 95.46 ± 0.20 | 69.94 ± 1.59 | 71.91 ± 1.95 | 70.57 ± 1.94 | 77.99 ± 14.79 | 17.82 ± 3.55
 | Scenario 2 | 88.58 ± 2.76 | 95.12 ± 0.27 | 74.28 ± 1.26 | 70.10 ± 1.23 | 80.06 ± 1.92 | 91.78 ± 3.89 | 23.43 ± 8.12
 | Scenario 3 | 87.82 ± 5.89 | 88.45 ± 7.63 | 62.42 ± 15.57 | 45.19 ± 18.08 | 76.27 ± 5.61 | 69.90 ± 21.65 | 47.46 ± 18.36
 | Scenario 4 | 89.32 ± 2.65 | 94.40 ± 1.67 | 83.54 ± 10.72 | 75.80 ± 5.47 | 66.96 ± 5.45 | 76.11 ± 13.96 | 40.05 ± 18.78
 | Scenario 5 | 91.77 ± 3.56 | 92.52 ± 4.32 | 84.89 ± 9.49 | 72.24 ± 5.48 | 88.40 ± 1.89 | 79.08 ± 18.31 | 66.67 ± 13.46
 | Scenario 6 | 87.81 ± 0.82 | 90.58 ± 0.35 | 96.80 ± 0.95 | 88.31 ± 2.44 | 96.75 ± 1.02 | 98.43 ± 0.08 | 23.44 ± 12.99
 | Scenario 7 | 95.70 ± 0.25 | 96.21 ± 0.26 | 93.21 ± 0.98 | 66.09 ± 9.67 | 93.56 ± 2.54 | 99.79 ± 0.22 | 49.76 ± 5.98
 | Scenario 8 | 95.20 ± 0.47 | 97.48 ± 0.65 | 96.92 ± 1.55 | 57.80 ± 3.91 | 90.39 ± 1.36 | 86.08 ± 5.71 | 32.15 ± 4.12
 | Scenario 9 | 92.50 ± 0.79 | 94.79 ± 0.41 | 97.31 ± 0.64 | 87.21 ± 3.21 | 98.32 ± 0.29 | 89.54 ± 10.92 | 43.10 ± 13.12
 | Scenario 10 | 92.81 ± 3.41 | 94.47 ± 2.53 | 98.69 ± 0.92 | 94.06 ± 1.97 | 92.86 ± 2.40 | 92.55 ± 2.35 | 54.72 ± 3.95
XGBoost | Scenario 1 | 93.29 ± 1.99 | 94.64 ± 1.88 | 86.94 ± 8.70 | 71.52 ± 2.11 | 86.73 ± 2.11 | 87.12 ± 18.89 | 57.79 ± 9.46
 | Scenario 2 | 94.19 ± 1.71 | 95.03 ± 1.95 | 90.26 ± 5.18 | 76.91 ± 9.51 | 91.65 ± 1.90 | 85.53 ± 19.40 | 65.56 ± 11.04
 | Scenario 3 | 92.53 ± 2.63 | 93.61 ± 2.74 | 86.45 ± 6.12 | 70.64 ± 9.46 | 87.07 ± 2.11 | 86.19 ± 18.76 | 58.99 ± 11.80
 | Scenario 4 | 90.47 ± 1.13 | 94.48 ± 1.93 | 82.90 ± 10.90 | 71.78 ± 12.71 | 61.52 ± 6.85 | 67.25 ± 31.69 | 40.27 ± 9.44
 | Scenario 5 | 93.33 ± 1.93 | 94.05 ± 1.94 | 90.84 ± 8.07 | 72.78 ± 7.08 | 89.95 ± 1.36 | 84.20 ± 17.79 | 60.98 ± 10.49
 | Scenario 6 | 92.16 ± 2.00 | 94.44 ± 1.62 | 90.75 ± 8.73 | 77.68 ± 10.07 | 97.36 ± 1.95 | 98.58 ± 1.54 | 69.15 ± 5.54
 | Scenario 7 | 95.44 ± 1.72 | 96.29 ± 1.48 | 95.27 ± 5.06 | 85.54 ± 9.5 | 98.39 ± 1.40 | 93.61 ± 9.84 | 72.00 ± 13.98
 | Scenario 8 | 95.61 ± 0.78 | 96.84 ± 0.89 | 91.36 ± 8.38 | 81.31 ± 11.16 | 96.57 ± 2.44 | 90.46 ± 16.55 | 69.32 ± 17.62
 | Scenario 9 | 95.67 ± 1.49 | 96.40 ± 1.48 | 94.97 ± 5.59 | 85.46 ± 9.16 | 98.65 ± 1.42 | 94.37 ± 9.94 | 75.99 ± 11.46
 | Scenario 10 | 95.75 ± 1.14 | 96.93 ± 0.95 | 94.93 ± 3.20 | 83.29 ± 9.10 | 96.84 ± 2.13 | 93.24 ± 12.01 | 79.12 ± 10.18
RF | Scenario 1 | 92.74 ± 2.02 | 94.48 ± 2.08 | 87.49 ± 6.70 | 73.19 ± 8.83 | 86.10 ± 2.76 | 89.69 ± 18.07 | 59.82 ± 13.94
 | Scenario 2 | 94.14 ± 1.84 | 95.02 ± 2.00 | 90.39 ± 6.98 | 77.79 ± 10.37 | 91.55 ± 2.10 | 89.40 ± 17.79 | 71.92 ± 11.25
 | Scenario 3 | 92.47 ± 2.86 | 93.68 ± 2.96 | 87.54 ± 7.30 | 72.68 ± 9.83 | 86.98 ± 2.17 | 90.03 ± 16.95 | 70.53 ± 16.17
 | Scenario 4 | 90.46 ± 1.17 | 94.62 ± 2.01 | 83.47 ± 10.89 | 73.74 ± 9.64 | 60.52 ± 6.05 | 65.36 ± 33.79 | 37.63 ± 9.76
 | Scenario 5 | 93.16 ± 1.81 | 93.72 ± 2.36 | 88.19 ± 7.20 | 72.17 ± 7.12 | 89.28 ± 1.51 | 89.51 ± 18.19 | 63.54 ± 11.08
 | Scenario 6 | 92.82 ± 2.08 | 94.41 ± 1.80 | 93.09 ± 8.43 | 72.81 ± 15.67 | 97.64 ± 1.76 | 99.26 ± 0.52 | 77.56 ± 11.71
 | Scenario 7 | 94.59 ± 2.09 | 95.52 ± 1.53 | 94.99 ± 6.21 | 81.82 ± 12.26 | 98.64 ± 1.22 | 96.87 ± 5.86 | 79.35 ± 10.13
 | Scenario 8 | 95.48 ± 1.29 | 96.95 ± 0.78 | 93.27 ± 7.77 | 85.08 ± 10.10 | 97.87 ± 2.19 | 95.98 ± 6.91 | 64.03 ± 11.15
 | Scenario 9 | 95.17 ± 1.83 | 96.17 ± 1.21 | 95.93 ± 3.78 | 83.37 ± 11.50 | 98.91 ± 1.15 | 96.00 ± 7.27 | 81.97 ± 9.39
 | Scenario 10 | 96.10 ± 0.92 | 97.17 ± 0.73 | 95.76 ± 3.61 | 85.36 ± 10.14 | 97.98 ± 1.94 | 93.36 ± 11.56 | 79.06 ± 10.15
SVM | Scenario 1 | 92.02 ± 2.29 | 94.34 ± 1.65 | 87.55 ± 5.39 | 75.21 ± 9.01 | 87.52 ± 3.95 | 87.17 ± 16.15 | 40.96 ± 18.71
 | Scenario 2 | 92.38 ± 2.50 | 94.65 ± 1.73 | 90.13 ± 5.65 | 76.41 ± 10.24 | 92.12 ± 2.54 | 84.90 ± 17.21 | 47.03 ± 19.76
 | Scenario 3 | 90.64 ± 2.68 | 93.25 ± 2.37 | 86.45 ± 6.97 | 72.40 ± 9.57 | 87.37 ± 3.11 | 85.05 ± 17.20 | 35.51 ± 16.05
 | Scenario 4 | 89.01 ± 1.06 | 94.30 ± 2.06 | 74.69 ± 9.83 | 71.02 ± 11.01 | 56.84 ± 6.20 | 67.08 ± 35.41 | 5.81 ± 11.61
 | Scenario 5 | 90.99 ± 3.27 | 93.00 ± 2.79 | 86.82 ± 6.69 | 71.57 ± 8.77 | 90.12 ± 0.97 | 83.87 ± 18.20 | 45.90 ± 19.81
 | Scenario 6 | 92.91 ± 1.70 | 95.80 ± 1.60 | 90.85 ± 8.51 | 79.91 ± 11.64 | 97.41 ± 0.89 | 99.38 ± 0.55 | 36.51 ± 17.09
 | Scenario 7 | 95.68 ± 1.47 | 96.82 ± 1.06 | 95.45 ± 3.67 | 87.21 ± 7.52 | 98.51 ± 1.33 | 97.54 ± 4.55 | 56.38 ± 20.15
 | Scenario 8 | 96.63 ± 0.76 | 97.31 ± 0.78 | 95.03 ± 3.47 | 82.92 ± 8.92 | 97.61 ± 1.97 | 94.59 ± 10.36 | 77.45 ± 19.18
 | Scenario 9 | 95.34 ± 1.84 | 96.98 ± 1.34 | 96.12 ± 3.72 | 87.57 ± 8.01 | 98.31 ± 1.65 | 96.39 ± 5.77 | 60.04 ± 21.40
 | Scenario 10 | 96.67 ± 0.92 | 97.37 ± 0.99 | 95.62 ± 3.71 | 83.76 ± 9.38 | 98.48 ± 1.26 | 93.04 ± 12.17 | 82.03 ± 20.87
Class \ Ground Truth | Soybean | Corn | Wheat | Forage | Forest | Built-Up | Other
---|---|---|---|---|---|---|---
Soybean | 19818 | 55 | 6 | 71 | 22 | 0 | 28 |
Corn | 778 | 21052 | 0 | 31 | 0 | 25 | 114 |
Wheat | 0 | 0 | 3789 | 11 | 0 | 0 | 0 |
Forage | 4 | 0 | 82 | 2914 | 0 | 0 | 0 |
Forest | 1 | 0 | 0 | 56 | 2443 | 0 | 0 |
Built-up | 0 | 0 | 0 | 0 | 0 | 480 | 0 |
Other | 42 | 0 | 0 | 0 | 0 | 0 | 328 |
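The overall accuracy and Cohen's kappa can be recomputed directly from this confusion matrix; note that neither measure depends on whether the rows or the columns hold the ground truth, since kappa's expected agreement uses the product of the marginals. A short sketch, with the matrix transcribed from the table above:

```python
# Confusion matrix from the table above
# (class order: Soybean, Corn, Wheat, Forage, Forest, Built-up, Other).
cm = [
    [19818, 55, 6, 71, 22, 0, 28],
    [778, 21052, 0, 31, 0, 25, 114],
    [0, 0, 3789, 11, 0, 0, 0],
    [4, 0, 82, 2914, 0, 0, 0],
    [1, 0, 0, 56, 2443, 0, 0],
    [0, 0, 0, 0, 0, 480, 0],
    [42, 0, 0, 0, 0, 0, 328],
]

n = sum(map(sum, cm))                       # total pixels
diag = sum(cm[i][i] for i in range(len(cm)))
row_sums = [sum(r) for r in cm]
col_sums = [sum(r[j] for r in cm) for j in range(len(cm))]

oa = diag / n                               # observed (overall) accuracy
pe = sum(r * c for r, c in zip(row_sums, col_sums)) / n**2  # chance agreement
kappa = (oa - pe) / (1 - pe)

print(f"OA = {oa:.4f}, kappa = {kappa:.4f}")  # OA ≈ 0.9746, kappa ≈ 0.9618
```

The row sums (20,000; 22,000; 3800; 3000; 2500; 480; 370) match the per-class testing-pixel counts listed earlier, which is a quick consistency check on the transcription.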
Sub-Dataset | Input | OA (%) | Kappa
---|---|---|---
Sub-dataset 1 | Coherency matrix + VENµS | 93.58 ± 1.55 | 0.90 ± 0.02
 | MNF (Coherency matrix + VENµS) | 95.85 ± 0.95 | 0.94 ± 0.01
 | Pauli + VENµS | 93.86 ± 1.52 | 0.91 ± 0.02
 | MNF (Pauli + VENµS) | 95.85 ± 0.84 | 0.94 ± 0.01
Sub-dataset 2 | Coherency matrix + VENµS | 91.59 ± 3.86 | 0.87 ± 0.06
 | MNF (Coherency matrix + VENµS) | 92.77 ± 2.65 | 0.89 ± 0.04
 | Pauli + VENµS | 92.00 ± 2.98 | 0.88 ± 0.04
 | MNF (Pauli + VENµS) | 92.70 ± 2.93 | 0.89 ± 0.04
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Liao, C.; Wang, J.; Xie, Q.; Baz, A.A.; Huang, X.; Shang, J.; He, Y. Synergistic Use of Multi-Temporal RADARSAT-2 and VENµS Data for Crop Classification Based on 1D Convolutional Neural Network. Remote Sens. 2020, 12, 832. https://doi.org/10.3390/rs12050832