A Novel Spatio-Temporal FCN-LSTM Network for Recognizing Various Crop Types Using Multi-Temporal Radar Images
"> Figure 1
<p>Training and test regions in Denmark (the red and blue color zones represent training and test regions).</p> "> Figure 2
<p>Schematic illustration of a convolutional long short-term memory (ConvLSTM) cell with all variables.</p> "> Figure 3
<p>Overall architecture of proposed sequential network for recognizing different crop types.</p> "> Figure 4
<p>Illustration of our approach based on the combined fully convolutional network (FCN) and ConvLSTM network.</p> "> Figure 5
<p>Visualization of augmented images using rotating and scaling.</p> "> Figure 6
<p>Normalized confusion matrix obtained from the proposed network with 15 different classes.</p> "> Figure 7
<p>Qualitative results of the proposed sequential network (red pixels in the Error column represent the error).</p> "> Figure 8
<p>The pixel-based accuracy at different dates.</p> "> Figure 9
<p>The IoU index at different dates.</p> "> Figure 10
<p>The IoU of all analyzed crops by the end of August.</p> "> Figure 11
<p>Classification confidence of the proposed method. Red pixels and blue regions correspond to the error and label images, respectively. The green color bar shows the confidence of our approach for each crop type.</p> "> Figure 12
<p>The classification confidence and the errors of the proposed method for spring oats. The color bar and scale are the same as in <a href="#remotesensing-11-00990-f011" class="html-fig">Figure 11</a>.</p> "> Figure 13
<p>Some areas which were classified with high confidence by the proposed network, whilst these fields likely had wrong label in the reference images. The color legends are similar to <a href="#remotesensing-11-00990-f007" class="html-fig">Figure 7</a>.</p> "> Figure 14
<p>Normalized confusion matrix by [<a href="#B21-remotesensing-11-00990" class="html-bibr">21</a>] on our dataset.</p> ">
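For reference alongside Figure 2, the standard ConvLSTM cell formulation (Shi et al., 2015) is reproduced below; we assume the figure annotates these same variables, with $*$ denoting convolution, $\circ$ the Hadamard product, $X_t$ the input at time step $t$, and $H_t$, $C_t$ the hidden and cell states:

$$
\begin{aligned}
i_t &= \sigma\!\left(W_{xi} * X_t + W_{hi} * H_{t-1} + W_{ci} \circ C_{t-1} + b_i\right) \\
f_t &= \sigma\!\left(W_{xf} * X_t + W_{hf} * H_{t-1} + W_{cf} \circ C_{t-1} + b_f\right) \\
C_t &= f_t \circ C_{t-1} + i_t \circ \tanh\!\left(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c\right) \\
o_t &= \sigma\!\left(W_{xo} * X_t + W_{ho} * H_{t-1} + W_{co} \circ C_t + b_o\right) \\
H_t &= o_t \circ \tanh\!\left(C_t\right)
\end{aligned}
$$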
Abstract
1. Introduction
2. Materials
3. Methods
3.1. The Structure of CNN
3.2. Our Approach
3.3. Training
3.4. Accuracy Assessment
4. Results and Discussions
4.1. Comparison Between Two Structures of LSTM Units
4.2. Quantitative Classification Evaluation
4.3. Comparison with Alternative State-of-the-Art Methods for Recognizing Crop Types
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Boryan, C.; Yang, Z.; Mueller, R.; Craig, M. Monitoring US agriculture: The US department of agriculture, national agricultural statistics service, cropland data layer program. Geocarto Int. 2011, 26, 341–358. [Google Scholar] [CrossRef]
- Hunsaker, D.J.; Barnes, E.M.; Clarke, T.R.; Fitzgerald, G.J.; Pinter, P.J. Cotton irrigation scheduling using remotely sensed and FAO-56 basal crop coefficients. Trans. ASAE 2005, 48, 1395–1407. [Google Scholar] [CrossRef]
- Grimm, N.B.; Faeth, S.H.; Golubiewski, N.E.; Redman, C.L.; Wu, J.; Bai, X.; Briggs, J.M. Global change and the ecology of cities. Science 2008, 319, 756–760. [Google Scholar] [CrossRef] [PubMed]
- Liu, J.; Pattey, E.; Miller, J.R.; McNairn, H.; Smith, A.; Hu, B. Estimating crop stresses, aboveground dry biomass and yield of corn using multi-temporal optical data combined with a radiation use efficiency model. Remote Sens. Environ. 2010, 114, 1167–1177. [Google Scholar] [CrossRef]
- Liu, H.; Weng, Q. Enhancing temporal resolution of satellite imagery for public health studies: A case study of West Nile Virus outbreak in Los Angeles in 2007. Remote Sens. Environ. 2012, 117, 57–71. [Google Scholar] [CrossRef]
- Zhou, T.; Pan, J.; Zhang, P.; Wei, S.; Han, T. Mapping winter wheat with multi-temporal SAR and optical images in an urban agricultural region. Sensors 2017, 17, 1210. [Google Scholar] [CrossRef] [PubMed]
- Christiansen, M.P.; Laursen, M.S.; Mikkelsen, B.F.; Teimouri, N.; Jørgensen, R.N.; Sørensen, C.A.G. Current potentials and challenges using Sentinel-1 for broadacre field remote sensing. arXiv 2018, arXiv:1809.01652. [Google Scholar]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 8–10 June 2015; pp. 3431–3440. [Google Scholar]
- Romero, A.; Gatta, C.; Camps-Valls, G. Unsupervised deep feature extraction for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1349–1362. [Google Scholar] [CrossRef]
- Foerster, S.; Kaden, K.; Foerster, M.; Itzerott, S. Crop type mapping using spectral-temporal profiles and phenological information. Comput. Electron. Agric. 2012, 89, 30–40. [Google Scholar] [CrossRef]
- Hao, P.; Zhan, Y.; Wang, L.; Niu, Z.; Shakir, M. Feature Selection of Time Series MODIS Data for Early Crop Classification Using Random Forest: A Case Study in Kansas, USA. Remote Sens. 2015, 7, 5347–5369. [Google Scholar] [CrossRef] [Green Version]
- Hoberg, T.; Rottensteiner, F.; Feitosa, R.Q.; Heipke, C. Conditional random fields for multitemporal and multiscale classification of optical satellite imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 659–673. [Google Scholar] [CrossRef]
- Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443. [Google Scholar] [CrossRef]
- Wooding, M.G. Satellite Radar in Agriculture; ESA Publications Division ESTEC: Noordwijk, The Netherlands, 1995. [Google Scholar]
- Erten, E.; Yuzugullu, O.; Lopez-Sanchez, J.M.; Hajnsek, I. SAR algorithms for crop height estimation: The paddy-rice case study. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 7117–7120. [Google Scholar]
- Moran, M.S.; Alonso, L.; Moreno, J.F.; Pilar Cendrero Mateo, M.; de la Cruz, D.; Montoro, A. A RADARSAT-2 Quad-Polarized Time Series for Monitoring Crop and Soil Conditions in Barrax, Spain. IEEE Trans. Geosci. Remote Sens. 2012, 50, 1057–1070. [Google Scholar] [CrossRef]
- Capodici, F.; D’Urso, G.; Maltese, A. Investigating the Relationship between X-Band SAR Data from COSMO-SkyMed Satellite and NDVI for LAI Detection. Remote Sens. 2013, 5, 1389–1404. [Google Scholar] [CrossRef] [Green Version]
- Cutler, M.E.J.; Boyd, D.S.; Foody, G.M.; Vetrivel, A. Estimating tropical forest biomass with a combination of SAR image texture and Landsat TM data: An assessment of predictions between regions. ISPRS J. Photogramm. Remote Sens. 2012, 70, 66–77. [Google Scholar] [CrossRef] [Green Version]
- Hirooka, Y.; Homma, K.; Maki, M.; Sekiguchi, K. Applicability of synthetic aperture radar (SAR) to evaluate leaf area index (LAI) and its growth rate of rice in farmers’ fields in Lao PDR. Field Crops Res. 2015, 176, 119–122. [Google Scholar] [CrossRef]
- Boryan, C.; Yang, Z.; Haack, B. Evaluation of Sentinel-1A C-Band Synthetic Aperture Radar for Citrus Crop Classification in Florida, United States. In Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 7369–7372. [Google Scholar]
- Ndikumana, E.; Ho Tong Minh, D.; Baghdadi, N.; Courault, D.; Hossard, L. Deep recurrent neural network for agricultural classification using multitemporal SAR Sentinel-1 for Camargue, France. Remote Sens. 2018, 10, 1217. [Google Scholar] [CrossRef]
- Mullissa, A.G.; Persello, C.; Tolpekin, V. Fully Convolutional Networks for Multi-Temporal SAR Image Classification. In Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 6635–6638. [Google Scholar]
- Wei, S.; Zhang, H.; Wang, C.; Wang, Y.; Xu, L. Multi-Temporal SAR Data Large-Scale Crop Mapping Based on U-Net Model. Remote Sens. 2019, 11, 68. [Google Scholar] [CrossRef]
- Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep learning classification of land cover and crop types using remote sensing data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
- Whelen, T.; Siqueira, P. Time-series classification of Sentinel-1 agricultural data over North Dakota. Remote Sens. Lett. 2018, 9, 411–420. [Google Scholar] [CrossRef]
- Yang, X. Parameterizing support vector machines for land cover classification. Photogramm. Eng. Remote Sens. 2011, 77, 27–37. [Google Scholar]
- Ishii, T.; Nakamura, R.; Nakada, H.; Mochizuki, Y.; Ishikawa, H. Surface object recognition with CNN and SVM in Landsat 8 images. In Proceedings of the 2015 14th IAPR International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 18–22 May 2015; pp. 341–344. [Google Scholar]
- Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300. [Google Scholar] [CrossRef]
- Niculescu, S.; Ienco, D.; Hanganu, J. Application of Deep Learning of Multi-Temporal SENTINEL-1 Images for the Classification of Coastal Vegetation Zone of the Danube Delta. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 3. [Google Scholar] [CrossRef]
- Zhao, W.; Du, S. Learning multiscale and deep representations for classifying remotely sensed imagery. ISPRS J. Photogramm. Remote Sens. 2016, 113, 155–165. [Google Scholar] [CrossRef]
- Sharma, A.; Liu, X.; Yang, X. Land cover classification from multi-temporal, multi-spectral remotely sensed imagery using patch-based recurrent neural networks. Neural Netw. 2018, 105, 346–355. [Google Scholar] [CrossRef] [PubMed]
- Rußwurm, M.; Körner, M. Multi-temporal land cover classification with sequential recurrent encoders. ISPRS Int. J. Geo-Inf. 2018, 7, 129. [Google Scholar] [CrossRef]
- Van Tricht, K.; Gobin, A.; Gilliams, S.; Piccard, I. Synergistic use of radar Sentinel-1 and optical Sentinel-2 imagery for crop mapping: A case study for Belgium. Remote Sens. 2018, 10, 1642. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the NIPS 2012—Neural Information Processing Systems Conference, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 8–10 June 2015; pp. 1–9. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Teimouri, N.; Dyrmann, M.; Nielsen, P.; Mathiassen, S.; Somerville, G.; Jørgensen, R. Weed growth stage estimator using deep convolutional neural networks. Sensors 2018, 18, 1580. [Google Scholar] [CrossRef]
- Owen, P.W.; Milionis, N.; Papatheodorou, I.; Sniter, K.; Viegas, H.F.; Huth, J.; Bortnowschi, R. The Land Parcel Identification System: A Useful Tool to Determine the Eligibility of Agricultural Land—But Its Management Could Be Further Improved; Special Report 25; European Court of Auditors: Luxembourg, 2016. [Google Scholar]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv 2015, arXiv:1511.00561. [Google Scholar] [CrossRef]
ID | Class | Calibration Area (ha) | Test Area (ha) | Test Area (%) |
---|---|---|---|---|
1 | Winter barley | 16,904.94 | 7526.98 | 2.95 |
2 | Winter wheat | 82,524.95 | 35,219.85 | 13.83 |
3 | Winter rye and hybrid rye | 20,425.98 | 4727.83 | 1.86 |
4 | Winter triticale | 2198.21 | 1010.08 | 0.40 |
5 | Winter rapeseed | 23,035.48 | 11,688.50 | 4.59 |
6 | Spring barley | 81,121.04 | 39,712.37 | 15.59 |
7 | Spring oats | 10,704.71 | 2480.07 | 0.97 |
8 | Maize | 25,238.24 | 12,486.71 | 4.90 |
9 | Sugar beet | 16,003.96 | 3470.58 | 1.36 |
10 | Potato | 84,710.84 | 4424.42 | 1.74 |
11 | Green grain spring barley | 3303.15 | 661.11 | 0.26 |
12 | Peas | 2411.76 | 1751.12 | 0.69 |
13 | Grass seed | 5392.92 | 1837.69 | 0.72 |
14 | Permanent grass | 24,767.93 | 8707.79 | 3.42 |
15 | Background | 23,438.09 | 84,917.70 | 33.34 |
16 | Other crops | 82,546.88 | 34,101.06 | 13.39 |
Total | | 504,729.08 | 254,723.86 | 100.00 |
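The table above shows a pronounced class imbalance: the background class covers 33.34% of the test area while green grain spring barley covers only 0.26%. The reference list cites the focal loss of Lin et al., a standard remedy for such imbalance; purely as an illustration (not necessarily the exact loss the authors trained with), a minimal per-pixel focal loss in PyTorch could look like this:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    """Per-pixel multi-class focal loss (after Lin et al.).

    logits: (N, C, H, W) raw class scores; target: (N, H, W) integer labels.
    gamma = 2.0 follows the original focal-loss paper; its use on this crop
    dataset is our assumption, not a detail reported in the tables.
    """
    log_p = F.log_softmax(logits, dim=1)                      # (N, C, H, W)
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)  # (N, H, W)
    pt = log_pt.exp()                                         # prob. of true class
    # (1 - p)^gamma down-weights easy pixels (e.g., the vast background class),
    # so rare crops such as winter triticale contribute more to the gradient.
    return (-(1.0 - pt) ** gamma * log_pt).mean()
```

In a training loop this would be called as, e.g., `focal_loss(model(x), labels)` in place of plain cross-entropy.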
Classes | May Acc (%) | May IoU | June Acc (%) | June IoU | July Acc (%) | July IoU | August Acc (%) | August IoU
---|---|---|---|---|---|---|---|---
Winter barley | 60 | 0.42 | 91 | 0.73 | 93 | 0.81 | 94 | 0.79 |
Winter wheat | 87 | 0.68 | 92 | 0.82 | 93 | 0.83 | 93 | 0.85 |
Winter rye and hybrid rye | 55 | 0.43 | 75 | 0.63 | 85 | 0.74 | 86 | 0.74 |
Winter triticale | 3 | 0.03 | 24 | 0.21 | 51 | 0.39 | 55 | 0.40 |
Winter rapeseed | 85 | 0.77 | 96 | 0.90 | 95 | 0.90 | 95 | 0.90 |
Spring barley | 83 | 0.66 | 85 | 0.75 | 91 | 0.82 | 90 | 0.82 |
Spring oats | 1 | 0.01 | 32 | 0.14 | 71 | 0.53 | 71 | 0.51 |
Maize | 67 | 0.35 | 86 | 0.66 | 89 | 0.74 | 89 | 0.75 |
Sugar beet | 77 | 0.64 | 91 | 0.80 | 91 | 0.79 | 90 | 0.79 |
Potato | 10 | 0.09 | 79 | 0.68 | 84 | 0.73 | 84 | 0.74 |
Green grain spring barley | 1 | 0.01 | 3 | 0.03 | 27 | 0.24 | 27 | 0.24
Peas | 16 | 0.14 | 47 | 0.41 | 49 | 0.44 | 51 | 0.45 |
Grass seed | 14 | 0.13 | 64 | 0.51 | 74 | 0.61 | 70 | 0.57
Permanent grass | 27 | 0.20 | 30 | 0.23 | 34 | 0.27 | 36 | 0.29 |
Background | 86 | 0.71 | 87 | 0.75 | 88 | 0.77 | 88 | 0.77 |
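As a reading aid for the Acc and IoU columns above (and for Figures 8 and 9), here is a minimal NumPy sketch of how per-class pixel accuracy (recall) and intersection over union are commonly computed from a reference map and a prediction map; the paper's own evaluation protocol is the one described in Section 3.4 and may differ in detail:

```python
import numpy as np

def per_class_metrics(pred, label, num_classes):
    """Per-class pixel accuracy (recall) and IoU from two integer label maps.

    pred, label: arrays of equal shape holding class ids in [0, num_classes).
    Returns two arrays of length num_classes; classes absent from the
    reference map yield NaN.
    """
    acc = np.full(num_classes, np.nan)
    iou = np.full(num_classes, np.nan)
    for c in range(num_classes):
        tp = np.sum((pred == c) & (label == c))   # correctly labeled pixels
        fn = np.sum((pred != c) & (label == c))   # missed reference pixels
        fp = np.sum((pred == c) & (label != c))   # false alarms
        if tp + fn > 0:
            acc[c] = tp / (tp + fn)               # recall over reference pixels
        if tp + fn + fp > 0:
            iou[c] = tp / (tp + fn + fp)          # intersection over union
    return acc, iou
```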
Approach | Sensors (Sentinel) | Number of Classes | Classifier | Classification Accuracy | Accuracy on Our Dataset
---|---|---|---|---|---
Zhou et al. (2017) [6] p | S1 | 5 | Support vector machine (SVM) | 93% | –
Van Tricht et al. (2018) [13] p | S1 and S2 | 12 | Random forest (RF) | 82% | –
Mullissa et al. (2018) [22] f | S1 | 13 | FCN-DK3 | 54% | –
Ndikumana et al. (2018) [21] p | S1 | 11 | Recurrent neural network (RNN) | 89% (cross-validation five times) | 71%
Wei et al. (2019) [23] p | S1 | 4 | U-Net | 85% | –
Our approach p | S1 | 15 | FCN-ConvLSTM | 86% | 86%
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
MDPI and ACS Style
Teimouri, N.; Dyrmann, M.; Jørgensen, R.N. A Novel Spatio-Temporal FCN-LSTM Network for Recognizing Various Crop Types Using Multi-Temporal Radar Images. Remote Sens. 2019, 11, 990. https://doi.org/10.3390/rs11080990

AMA Style
Teimouri N, Dyrmann M, Jørgensen RN. A Novel Spatio-Temporal FCN-LSTM Network for Recognizing Various Crop Types Using Multi-Temporal Radar Images. Remote Sensing. 2019; 11(8):990. https://doi.org/10.3390/rs11080990

Chicago/Turabian Style
Teimouri, Nima, Mads Dyrmann, and Rasmus Nyholm Jørgensen. 2019. "A Novel Spatio-Temporal FCN-LSTM Network for Recognizing Various Crop Types Using Multi-Temporal Radar Images" Remote Sensing 11, no. 8: 990. https://doi.org/10.3390/rs11080990