Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images
"> Figure 1
<p>Study area with testing area enlarged on the right.</p> "> Figure 2
<p>High-resolution orthomosaic images: (<b>a</b>) 2019 image, (<b>b</b>) 2019 image with histogram matching process, and (<b>c</b>) 2017 image.</p> "> Figure 3
<p>Research flowchart.</p> "> Figure 4
<p>Illustration of the labeled ground truth on UAV images.</p> "> Figure 5
<p>Ground truth of rich lodging portion in white: (<b>a</b>) 2017 image and (<b>b</b>) 2019 image.</p> "> Figure 6
<p>SegNet structure illustration (reproduced from Badrinarayanan et al. [<a href="#B32-remotesensing-12-00633" class="html-bibr">32</a>]).</p> "> Figure 7
<p>FCN-AlexNet structure illustration (reproduced from Long et al. [<a href="#B33-remotesensing-12-00633" class="html-bibr">33</a>]).</p> "> Figure 8
<p>Performance evaluated by Per-category F1-score for the validation dataset.</p> "> Figure 9
<p>Rice lodging identification validation: (<b>a</b>) original image, (<b>b</b>) ground truth, (<b>c1–c4</b>) represents FCN-AlexNet (RGB), (RGB+ExG), (RGB+ExGR), (RGB+ExG+ExGR), respectively. (<b>d1–d4</b>) represents SegNet (RGB), (RGB+ExG), (RGB+ExGR), (RGB+ExG+ExGR), respectively.</p> "> Figure 10
<p>F1-score and accuracy comparison on the 2017 testing dataset for rice lodging.</p> "> Figure 11
<p>F1-score and accuracy comparison on the 2019 testing dataset for rice lodging.</p> "> Figure 12
<p>Results of FCN-AlexNet identification for rice lodging on 2017 testing dataset using various spectrum information: (<b>a</b>) RGB, (<b>b</b>) RGB+ExG, (<b>c</b>) RGB+ExGR, and (<b>d</b>) RGB+ExG+ExGR.</p> "> Figure 13
<p>Results of SegNet identification for rice lodging on 2017 testing dataset using various spectrum information: (<b>a</b>) RGB, (<b>b</b>) RGB+ExG, (<b>c</b>) RGB+ExGR, and (<b>d</b>) RGB+ExG+ExGR.</p> "> Figure 14
<p>Results of MLC identification for rice lodging on 2017 testing dataset using various spectrum information: (<b>a</b>) RGB, (<b>b</b>) RGB+ExG, (<b>c</b>) RGB+ExGR, and (<b>d</b>) RGB+ExG+ExGR.</p> "> Figure 15
<p>Results of rice lodging identification on 2019 testing dataset (white: rice lodging, black: others): (<b>a</b>) FCN-AlexNet using RGB+ExGR information, (<b>b</b>) SegNet using RGB+ExGR information, (<b>c</b>) ground truth. Results comparison with ground truth for (<b>d</b>) FCN-AlexNet using RGB+ExGR information and (<b>e</b>) SegNet using RGB+ExGR information.</p> "> Figure 16
<p>Results of identification on the total 230 ha. area by the best performance models: (<b>a</b>) 2017 dataset orthoimage, (<b>b</b>) 2017 dataset prediction of FCN-AlexNet using RGB+ExGR information, (<b>c</b>) 2017 dataset prediction of SegNet using RGB+ExGR, (<b>d</b>) ground truth, (<b>e</b>) 2019 dataset orthoimage, (<b>f</b>) 2019 dataset prediction of FCN-AlexNet using RGB+ExGR information, <b>(g</b>) 2019 dataset prediction of SegNet using RGB+ExGR.</p> ">
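Figure 2b refers to a histogram matching step that aligns the color distribution of one orthomosaic to the other before inference. A minimal NumPy sketch of that operation for a single channel (the function name and toy arrays are illustrative; a production pipeline could equally use scikit-image's `exposure.match_histograms`, which appears in the study's library list):

```python
import numpy as np

def match_histograms(source, reference):
    """Remap source pixel values so their cumulative distribution
    matches that of the reference image (single channel)."""
    src_values, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical CDFs of both images
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference value at that quantile
    matched = np.interp(src_cdf, ref_cdf, ref_values)
    return matched[src_idx].reshape(source.shape)
```

For a three-band UAV image the same mapping would be applied per channel.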
Abstract
1. Introduction
- Rice lodging spectral information is obtained from UAV images collected over a study area (about 40 hectares) in Taiwan to reduce the workload of in-situ manual visual field observations.
- UAV images are classified for rice lodging by semantic segmentation neural network models, which aim to improve the accuracy of lodging assessment and to serve as evidence for subsequent disaster subsidies.
- Multiple sources of information, including the visible-light spectrum and vegetation indices, are incorporated into the proposed rice lodging assessment method to improve image classification accuracy.
- Two standard semantic segmentation network models are tested, and their applicability is evaluated based on computational speed and classification accuracy.
- A rice lodging image dataset is established that can serve as a valuable resource for expert systems, disaster relief assistance, and agricultural insurance applications.
2. Materials and Methods
2.1. Data Description
2.2. Training-Validation and Testing Datasets
2.3. Vegetation Indices
2.4. Semantic Segmentation Model Training
2.5. Evaluation Metrics
3. Results and Discussion
3.1. Training-Validation Model Evaluation
3.2. Testing Data Inference Evaluation
4. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Taiwan Agriculture and Food Agency, Council of Agriculture, Executive Yuan. Agriculture Statistic Year Book 2014. Available online: https://eng.coa.gov.tw/upload/files/eng_web_structure/2503255/8-4.pdf (accessed on 24 January 2020).
- Taiwan Agriculture and Food Agency, Council of Agriculture, Executive Yuan. Agriculture Statistic Year Book 2015. Available online: https://eng.coa.gov.tw/upload/files/eng_web_structure/2505278/A08-4_104.pdf (accessed on 24 January 2020).
- Taiwan Agriculture and Food Agency, Council of Agriculture, Executive Yuan. Agriculture Statistic Year Book 2016. Available online: https://eng.coa.gov.tw/upload/files/eng_web_structure/2505400/AA-2_A08-4_105.pdf (accessed on 24 January 2020).
- Taiwan Agriculture and Food Agency, Council of Agriculture, Executive Yuan. Agriculture Statistic Year Book 2017. Available online: https://eng.coa.gov.tw/upload/files/eng_web_structure/2505508/ZA_ZA10-4_106.pdf (accessed on 24 January 2020).
- Taiwan Agriculture and Food Agency, Council of Agriculture, Executive Yuan. Agriculture Statistic Year Book 2018. Available online: https://eng.coa.gov.tw/upload/files/eng_web_structure/2505565/ZA_ZA10-4_280_107.pdf (accessed on 24 January 2020).
- Yang, M.D.; Yang, Y.F.; Hsu, S.C. Application of remotely sensed data to the assessment of terrain factors affecting the Tsao-Ling landslide. Can. J. Remote Sens. 2004, 30, 593–603.
- Yang, M.D. A genetic algorithm (GA) based automated classifier for remote sensing imagery. Can. J. Remote Sens. 2007, 33, 593–603.
- Yang, M.D.; Su, T.C.; Hsu, C.H.; Chang, K.C.; Wu, A.M. Mapping of the 26 December 2004 tsunami disaster by using FORMOSAT-2 images. Int. J. Remote Sens. 2007, 28, 3071–3091.
- Chauhan, S.; Darvishzadeh, R.; Boschetti, M.; Pepe, M.; Nelson, A. Remote Sensing-Based Crop Lodging Assessment: Current Status and Perspectives. ISPRS J. Photogramm. Remote Sens. 2019, 151, 124–140.
- Zhao, L.; Yang, J.; Li, P.; Shi, L.; Zhang, L. Characterizing Lodging Damage in Wheat and Canola using Radarsat-2 Polarimetric SAR Data. Remote Sens. Lett. 2017, 8, 667–675.
- Shu, M.; Zhou, L.; Gu, X.; Ma, Y.; Sun, Q.; Yang, G.; Zhou, C. Monitoring of maize lodging using multi-temporal Sentinel-1 SAR data. Adv. Space Res. 2020, 65, 470–480.
- Han, D.; Yang, H.; Yang, G.; Qiu, C. Monitoring Model of Corn Lodging Based on Sentinel-1 Radar Image. In Proceedings of the 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China, 13–14 November 2017; pp. 1–5.
- Coquil, B. FARMSTAR a Fully Operational System for Crop Management from Satellite Imagery. In Proceedings of the 7th International Conference on Precision Agriculture, Minneapolis, MN, USA, 25–28 July 2004.
- Yang, M.; Huang, K.; Kuo, Y.; Tsai, H.; Lin, L. Spatial and Spectral Hybrid Image Classification for Rice Lodging Assessment through UAV Imagery. Remote Sens. 2017, 9, 583.
- Liu, Z.; Li, C.; Wang, Y.; Huang, W.; Ding, X.; Zhou, B.; Wu, H.; Wang, D.; Shi, J. Comparison of Spectral Indices and Principal Component Analysis for Differentiating Lodged Rice Crop from Normal Ones. In Proceedings of the International Conference on Computer and Computing Technologies in Agriculture (CCTA), Beijing, China, 29–31 October 2011; pp. 84–92.
- Wilke, N.; Siegmann, B.; Klingbeil, L.; Burkart, A.; Kraska, T.; Muller, O.; van Doorn, A.; Heinemann, S.; Rascher, U. Quantifying Lodging Percentage and Lodging Severity using a UAV-Based Canopy Height Model Combined with an Objective Threshold Approach. Remote Sens. 2019, 11, 515.
- Zhao, X.; Yuan, Y.; Song, M.; Ding, Y.; Lin, F.; Liang, D.; Zhang, D. Use of Unmanned Aerial Vehicle Imagery and Deep Learning Unet to Extract Rice Lodging. Sensors 2019, 19, 3859.
- Mardanisamani, S.; Maleki, F.; Hosseinzadeh Kassani, S.; Rajapaksa, S.; Duddu, H.; Wang, M.; Shirtliffe, S.; Ryu, S.; Josuttes, A.; Zhang, T. Crop Lodging Prediction from UAV-Acquired Images of Wheat and Canola using a DCNN Augmented with Handcrafted Texture Features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–20 June 2019.
- Kwak, G.; Park, N. Impact of Texture Information on Crop Classification with Machine Learning and UAV Images. Appl. Sci. 2019, 9, 643.
- Yang, Q.; Shi, L.; Han, J.; Zha, Y.; Zhu, P. Deep Convolutional Neural Networks for Rice Grain Yield Estimation at the Ripening Stage using UAV-Based Remotely Sensed Images. Field Crops Res. 2019, 235, 142–153.
- Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Deng, X.; Zhang, L. A Fully Convolutional Network for Weed Mapping of Unmanned Aerial Vehicle (UAV) Imagery. PLoS ONE 2018, 13, e0196302.
- Sa, I.; Popović, M.; Khanna, R.; Chen, Z.; Lottes, P.; Liebisch, F.; Nieto, J.; Stachniss, C.; Walter, A.; Siegwart, R. Weedmap: A Large-Scale Semantic Weed Mapping Framework using Aerial Multispectral Imaging and Deep Neural Network for Precision Farming. Remote Sens. 2018, 10, 1423.
- Ma, X.; Deng, X.; Qi, L.; Jiang, Y.; Li, H.; Wang, Y.; Xing, X. Fully Convolutional Network for Rice Seedling and Weed Image Segmentation at the Seedling Stage in Paddy Fields. PLoS ONE 2019, 14, e0215676.
- Ferentinos, K.P. Deep Learning Models for Plant Disease Detection and Diagnosis. Comput. Electron. Agric. 2018, 145, 311–318.
- Kerkech, M.; Hafiane, A.; Canals, R. Deep Learning Approach with Colorimetric Spaces and Vegetation Indices for Vine Diseases Detection in UAV Images. Comput. Electron. Agric. 2018, 155, 237–243.
- Fuentes-Pacheco, J.; Torres-Olivares, J.; Roman-Rangel, E.; Cervantes, S.; Juarez-Lopez, P.; Hermosillo-Valadez, J.; Rendón-Mancha, J.M. Fig Plant Segmentation from Aerial Images using a Deep Convolutional Encoder-Decoder Network. Remote Sens. 2019, 11, 1157.
- Grinblat, G.L.; Uzal, L.C.; Larese, M.G.; Granitto, P.M. Deep Learning for Plant Identification using Vein Morphological Patterns. Comput. Electron. Agric. 2016, 127, 418–424.
- Gonzalez, R.C.; Woods, R.E. Digital Image Processing; Pearson Education: Cranbury, NJ, USA, 2002.
- Richards, J.A. Remote Sensing Digital Image Analysis; Springer: Berlin/Heidelberg, Germany, 1999.
- Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D. Color Indices for Weed Identification under Various Soil, Residue, and Lighting Conditions. Trans. ASAE 1995, 38, 259–269.
- Meyer, G.E.; Neto, J.C. Verification of Color Vegetation Indices for Automated Crop Imaging Applications. Comput. Electron. Agric. 2008, 63, 282–293.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
- Yang, M.D.; Su, T.C.; Pan, N.F.; Yang, Y.F. Systematic image quality assessment for sewer inspection. Expert Syst. Appl. 2011, 38, 1766–1776.
- Paszke, A.; Chaurasia, A.; Kim, S.; Culurciello, E. ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation. arXiv 2016, arXiv:1606.02147.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
| Description | Sony QX100 | DJI Phantom 4 Pro |
|---|---|---|
| Pixel size (µm) | 2.4 | 2.4 |
| Focal length (mm) | 10.4 | 8.8 |
| Resolution (width × height) (pixel) | 5472 × 3648 | 5472 × 3648 |
| Image data (bit) | 8 | 8 |
| Spatial resolution from 200 m height image (cm/pixel) | 4.64 | 5.48 |
| Sensor size (mm) | 13.2 × 8.8 | 13.2 × 8.8 |
| Field of view (horizontal, vertical) (degree) | 64.8, 45.9 | 73.7, 53.1 |
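The spatial-resolution row follows the standard ground-sample-distance relation, GSD = pixel size × flight height / focal length. A quick check (the helper name is illustrative):

```python
def ground_sample_distance(pixel_size_um, focal_length_mm, height_m):
    """Ground sample distance in cm/pixel:
    pixel size (m) * flight height (m) / focal length (m), converted to cm."""
    return pixel_size_um * 1e-6 * height_m / (focal_length_mm * 1e-3) * 100.0
```

At 200 m this gives about 4.62 cm/pixel for the 10.4 mm lens and 5.45 cm/pixel for the 8.8 mm lens, close to the tabulated 4.64 and 5.48; the small differences likely reflect rounded sensor specifications.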
| Camera | Sony QX-100 | | | DJI Phantom 4 Pro |
|---|---|---|---|---|
| Collection date | 2017/06/08 | | | 2019/05/23 |
| Resolution (width × height) | 46,343 × 25,658 | | | 15,977 × 8191 |
| Flight height (m) | 230 | | | 200 |
| Area covered (ha) | 430 | | | 120 |
| GSD (cm) | 5.3 | | | 5.7 |
| Dataset | train | validation | test | test |
| Tile resolution (col × row, pixels) | 480 × 480 | 480 × 480 | 1440 × 1440 | 1440 × 1440 |
| # effective tiles | 2082 | 694 | 72 | 72 |
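The tile counts above come from cropping each orthomosaic into fixed-size, non-overlapping patches. A sketch of that step (function name is illustrative; partial tiles at the right and bottom edges are assumed to be dropped):

```python
import numpy as np

def tile_image(image, tile):
    """Split an H x W x C array into non-overlapping tile x tile patches,
    discarding incomplete patches at the right/bottom edges."""
    h, w = image.shape[:2]
    tiles = []
    for row in range(0, h - tile + 1, tile):
        for col in range(0, w - tile + 1, tile):
            tiles.append(image[row:row + tile, col:col + tile])
    return tiles
```

For example, a 960 × 1440 image yields six 480 × 480 tiles.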
| Vegetation Index | Formula | Reference |
|---|---|---|
| Excess Green (ExG) | 2g − r − b | Woebbecke et al. (1995) [30] |
| Excess Red (ExR) | 1.4r − g | Meyer and Neto (2008) [31] |
| Excess Green minus Excess Red (ExGR) | ExG − ExR | Meyer and Neto (2008) [31] |

where r, g, and b are the chromatic coordinates, i.e., each visible band normalized by the sum R + G + B.
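The indices can be computed directly from the normalized chromatic coordinates. A minimal sketch, assuming channel order R, G, B and per-pixel normalization by R + G + B (function name is illustrative):

```python
import numpy as np

def vegetation_indices(rgb):
    """rgb: float array of shape (..., 3), channels ordered R, G, B.
    Returns ExG, ExR, and ExGR computed from chromatic coordinates."""
    total = rgb.sum(axis=-1, keepdims=True)
    total = np.where(total == 0, 1.0, total)   # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, -1, 0)  # chromatic coordinates
    exg = 2.0 * g - r - b                      # Excess Green
    exr = 1.4 * r - g                          # Excess Red
    exgr = exg - exr                           # ExG minus ExR
    return exg, exr, exgr
```

The resulting ExG and ExGR bands are stacked with the RGB channels to form the RGB+ExG, RGB+ExGR, and RGB+ExG+ExGR inputs compared in the experiments.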
| Item | Specification |
|---|---|
| CPU | Intel Xeon Gold 6154 @ 3.00 GHz (4 cores/GPU node) |
| RAM | 90 GB/GPU node |
| Accelerator | NVIDIA Tesla V100 32 GB SXM2/GPU node |
| Image | TensorFlow-19.08-py3 |
| Libraries | Python 3.6.8, NumPy 1.14.5, scikit-image 0.16.1, TensorFlow-GPU 1.14, Keras 2.3.1, Jupyter Notebook, CUDA 10.1 |
| Evaluation Metric | Formula |
|---|---|
| Precision | TP / (TP + FP) |
| Recall | TP / (TP + FN) |
| Accuracy | (TP + TN) / (TP + TN + FP + FN) |
| Overall accuracy (OA) | correctly classified pixels / total pixels |
| F1 score | 2 × Precision × Recall / (Precision + Recall) |
| Per-category F1 score | 2 × Precision_i × Recall_i / (Precision_i + Recall_i) for category i |
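All of the tabulated metrics derive from the multi-class confusion matrix, with each category's TP, FP, and FN read off its row and column. A sketch of the per-category computation (function name is illustrative):

```python
import numpy as np

def per_class_scores(y_true, y_pred, n_classes):
    """Per-class precision, recall, F1 and overall accuracy from label maps."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1                      # rows: truth, cols: prediction
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # TP / (TP + FP)
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # TP / (TP + FN)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    oa = tp.sum() / cm.sum()                         # overall accuracy
    return precision, recall, f1, oa
```
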
| Model | Information | Rice Paddy (%) | Rice Lodging (%) | Road (%) | Bareland (%) | Background (%) | OA (%) |
|---|---|---|---|---|---|---|---|
| FCN-AlexNet | RGB | 92.77 | 77.91 | 60.04 | 92.95 | 92.94 | 90.57 |
| | RGB+ExG | 92.51 | 76.32 | 60.67 | 93.72 | 92.91 | 90.40 |
| | RGB+ExGR | 93.00 | 80.08 | 56.95 | 95.36 | 93.52 | 91.24 |
| | RGB+ExG+ExGR | 92.49 | 77.01 | 57.74 | 94.31 | 92.86 | 90.38 |
| SegNet | RGB | 91.49 | 70.00 | 62.13 | 92.10 | 93.27 | 89.56 |
| | RGB+ExG | 91.02 | 75.37 | 67.37 | 93.66 | 93.10 | 89.70 |
| | RGB+ExGR | 91.29 | 74.17 | 4.24 | 92.24 | 91.11 | 88.04 |
| | RGB+ExG+ExGR | 90.91 | 70.44 | 46.68 | 91.10 | 88.06 | 85.80 |
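SegNet (Figure 6) differs from FCN-AlexNet chiefly in its decoder, which upsamples feature maps using the argmax positions saved during encoder max pooling rather than learned deconvolution. A toy single-channel NumPy sketch of that pool/unpool pairing (function names and the stride-2 setting are illustrative):

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """k x k max pooling that also records each maximum's flat position."""
    h, w = x.shape
    pooled = np.zeros((h // k, w // k))
    idx = np.zeros((h // k, w // k), dtype=np.int64)
    for i in range(h // k):
        for j in range(w // k):
            block = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            a = int(np.argmax(block))
            pooled[i, j] = block.flat[a]
            idx[i, j] = (i * k + a // k) * w + (j * k + a % k)
    return pooled, idx

def unpool(pooled, idx, shape):
    """SegNet-style unpooling: scatter values back to the saved positions,
    leaving all other locations zero."""
    out = np.zeros(shape[0] * shape[1])
    out[idx.ravel()] = pooled.ravel()
    return out.reshape(shape)
```

Reusing the indices preserves boundary locations through the decoder, which matters for delineating lodged patches, at the cost of sparse feature maps that subsequent convolutions must densify.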
| Classifier | Information | Precision (%) | Recall (%) | Accuracy (%) | F1-Score (%) | Time (s) |
|---|---|---|---|---|---|---|
| FCN-AlexNet | RGB | 84.73 | 82.43 | 94.43 | 83.56 | 59 |
| | RGB+ExG | 84.92 | 80.85 | 94.25 | 82.84 | 65 |
| | RGB+ExGR | 77.02 | 88.80 | 93.53 | 82.49 | 66 |
| | RGB+ExG+ExGR | 82.02 | 84.44 | 94.15 | 83.21 | 72 |
| SegNet | RGB | 83.10 | 69.06 | 92.28 | 75.43 | 101 |
| | RGB+ExG | 71.03 | 89.64 | 91.94 | 79.26 | 108 |
| | RGB+ExGR | 73.96 | 83.36 | 92.10 | 78.38 | 109 |
| | RGB+ExG+ExGR | 87.66 | 57.38 | 91.30 | 69.36 | 106 |
| MLC | RGB | 57.43 | 96.16 | 87.10 | 71.91 | 1342 |
| | RGB+ExG | 61.11 | 92.65 | 88.61 | 73.65 | 1538 |
| | RGB+ExGR | 63.42 | 91.76 | 89.50 | 75.00 | 1492 |
| | RGB+ExG+ExGR | 56.47 | 96.64 | 86.63 | 71.29 | 1526 |
| Classifier | Information | Precision (%) | Recall (%) | Accuracy (%) | F1-Score (%) | Time (s) |
|---|---|---|---|---|---|---|
| FCN-AlexNet | RGB | 99.12 | 39.59 | 90.66 | 56.58 | 57 |
| | RGB+ExG | 95.03 | 59.18 | 93.25 | 72.94 | 67 |
| | RGB+ExGR | 95.32 | 66.39 | 94.33 | 78.27 | 68 |
| | RGB+ExG+ExGR | 93.19 | 67.03 | 94.18 | 77.97 | 71 |
| SegNet | RGB | 87.55 | 38.65 | 89.72 | 53.63 | 99 |
| | RGB+ExG | 57.06 | 67.50 | 87.19 | 61.84 | 109 |
| | RGB+ExGR | 81.35 | 58.59 | 91.57 | 68.12 | 107 |
| | RGB+ExG+ExGR | 82.47 | 29.07 | 88.14 | 42.99 | 113 |
| MLC | RGB | 58.67 | 79.07 | 88.22 | 67.36 | 1416 |
| | RGB+ExG | 50.99 | 87.71 | 85.15 | 64.49 | 1572 |
| | RGB+ExGR | 56.88 | 86.33 | 87.83 | 68.58 | 1512 |
| | RGB+ExG+ExGR | 57.03 | 80.67 | 87.68 | 66.82 | 1562 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yang, M.-D.; Tseng, H.-H.; Hsu, Y.-C.; Tsai, H.P. Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images. Remote Sens. 2020, 12, 633. https://doi.org/10.3390/rs12040633