Classifying Wheat Hyperspectral Pixels of Healthy Heads and Fusarium Head Blight Disease Using a Deep Neural Network in the Wild Field
"> Figure 1
<p>Hyperspectral imagery field experiment and manual hyperspectral image <b>Region Of Interst</b> (ROI) for diseased, healthy, and background.</p> "> Figure 2
<p>The deep convolutional neural network.</p> "> Figure 3
<p>Deep convolutional recurrent neural network with bidirectional Long Short Term (LSTM) and bidirectional GRU for hyperspectral image classification.</p> "> Figure 4
<p>Principal Component Analysis (PCA) image of the original data and the mean-removed data. (<b>a</b>) Illustration of the original data by PCA; and, (<b>b</b>) Illustration of the mean removed data by PCA. The red points are the first-day samples, and the yellow points are the second-day samples.</p> "> Figure 5
<p>Accuracy and loss in the training dataset and validation dataset. (<b>a</b>,<b>b</b>) Illustration of the accuracy and loss by 1D-CNN and 2D-CNN. (<b>c</b>,<b>d</b>) Illustration of the accuracy and loss by LSTM and GRU. (<b>e</b>,<b>f</b>) Illustration of the accuracy and loss by 2D-CNN-LSTM and 2D-CNN-GRU. (<b>g</b>,<b>h</b>) Illustration of the accuracy and loss by 2D-CNN-BidLSTM and 2D-CNN-BidGRU.</p> "> Figure 6
<p>Confusion matrix of the testing dataset.</p> "> Figure 7
<p>Precision and Recall.</p> "> Figure 8
<p>Original hyperspectral image and the grey-image mapped using different models (white represents the background dataset; grey represents the healthy dataset; and the black represents the disease dataset).</p> ">
Abstract
1. Introduction
- Design and complete a hyperspectral image classification experiment for healthy heads and Fusarium head blight disease in the wild field. The hyperspectral image pixels are divided by class into a training dataset, a validation dataset, and a testing dataset to train the models.
- Compare and improve different deep neural networks for hyperspectral image classification. These networks include a DCNN with two input data structures, DRNNs with Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), and an improved hybrid CRNN.
- Use these assessment methods to determine the best model for classifying hyperspectral image pixels. The different SVM and deep neural network models are assessed and analysed on the training, validation, and testing datasets.
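The pixel-level split described above can be sketched as follows. This is an illustrative sketch only: the 60/20/20 split ratio, the random seed, and the synthetic data are assumptions, not the paper's actual values.

```python
# Sketch: label each pixel's spectrum (background / healthy / diseased)
# and divide the pixels into training, validation, and testing sets.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands = 1000, 256
spectra = rng.random((n_pixels, n_bands))       # one spectrum per pixel
labels = rng.integers(0, 3, size=n_pixels)      # 0=background, 1=healthy, 2=diseased

# Shuffle pixel indices, then cut into 60% train / 20% validation / 20% test
idx = rng.permutation(n_pixels)
n_train, n_val = int(0.6 * n_pixels), int(0.2 * n_pixels)
train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]

X_train, y_train = spectra[train_idx], labels[train_idx]
X_val, y_val = spectra[val_idx], labels[val_idx]
X_test, y_test = spectra[test_idx], labels[test_idx]
```

Splitting by pixel index keeps the three subsets disjoint, so no pixel contributes to both training and evaluation.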
2. Materials and Methods
2.1. Plant Material
2.2. Experiment Apparatus and Procedure in the Field
- the tripod apparatus is placed about 30 cm in front of the samples;
- the hyperspectral camera is adjusted to a height of 1.5 m above the ground;
- the cloud platform is set at 45 degrees in the horizontal direction;
- the scan range is −30 degrees to +30 degrees; and
- the measurement times are from 11:00 a.m. to 2:00 p.m. to acquire sufficient light.
3. Model Analysis
3.1. The Deep Convolutional Neural Network
3.2. Deep Recurrent Neural Network
3.3. Deep Convolutional Recurrent Neural Network
3.4. Evaluation Method
4. Results
4.1. Experiment Dataset and Analysis
4.2. Model Training
4.3. Model Testing
4.4. Original Hyperspectral Image Mapping
5. Discussion
5.1. Extracting and Representing Deep Characteristics for Disease Symptoms
5.2. Assessing the Performance of Different Models for Hyperspectral Image Pixel Classification
5.3. Next Steps
6. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
CNN Configuration

| 1D-CNN | 2D-CNN |
|---|---|
| Input (256 × 1) spectrum | Input (16 × 16) spectrum |
| Convolution 3–64 | Convolution 3 × 3–64 |
| Convolution 3–64 | Convolution 3 × 3–64 |
| Maxpooling 2 | Maxpooling 2 × 2 |
| Convolution 3–64 | Convolution 3 × 3–64 |
| Convolution 3–64 | Convolution 3 × 3–64 |
| Maxpooling 2 | Maxpooling 2 × 2 |
| FC Dense | FC Dense |
| Softmax | Softmax |
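The 2D-CNN column of the table can be sketched in Keras. The reshaping of each 256-band spectrum into a 16 × 16 input and the two 3 × 3–64 convolution blocks with 2 × 2 max-pooling follow the table; the padding mode, the width of the FC dense layer, and the activations are assumptions.

```python
# Hedged Keras sketch of the table's 2D-CNN configuration.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_2d_cnn(n_classes=3):
    return keras.Sequential([
        keras.Input(shape=(16, 16, 1)),            # 256-band spectrum as 16 x 16
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),      # FC dense (width assumed)
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_2d_cnn()
spectra = np.random.rand(4, 256).astype("float32").reshape(-1, 16, 16, 1)
probs = model(spectra)                             # (4, 3) class probabilities
```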
RNN Configuration

| LSTM | GRU |
|---|---|
| Stacked LSTM 64 | Stacked GRU 64 |
| Stacked LSTM 64 | Stacked GRU 64 |
| Stacked LSTM 64 | Stacked GRU 64 |
| Softmax | Softmax |
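The GRU column of the table, with the spectrum treated as a 256-step sequence, can be sketched as below. The three stacked 64-unit layers follow the table; the one-band-per-step sequence shaping is an assumption.

```python
# Hedged Keras sketch of the stacked-GRU classifier from the table.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(256, 1)),         # each spectral band is one step
    layers.GRU(64, return_sequences=True),
    layers.GRU(64, return_sequences=True),
    layers.GRU(64),                      # final layer returns the last state
    layers.Dense(3, activation="softmax"),
])

x = np.random.rand(2, 256, 1).astype("float32")
p = model(x)                             # (2, 3) class probabilities
```

The LSTM variant is identical with `layers.LSTM` in place of `layers.GRU`.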
DCRNN Configuration

| 2D-CNN-LSTM | 2D-CNN-GRU | 2D-CNN-BidLSTM | 2D-CNN-BidGRU |
|---|---|---|---|
| Input (16 × 16) | Input (16 × 16) | Input (16 × 16) | Input (16 × 16) |
| Convolution 3 × 3–64 | Convolution 3 × 3–64 | Convolution 3 × 3–64 | Convolution 3 × 3–64 |
| Convolution 3 × 3–64 | Convolution 3 × 3–64 | Convolution 3 × 3–64 | Convolution 3 × 3–64 |
| Maxpooling 2 × 2 | Maxpooling 2 × 2 | Maxpooling 2 × 2 | Maxpooling 2 × 2 |
| Convolution 3 × 3–64 | Convolution 3 × 3–64 | Convolution 3 × 3–64 | Convolution 3 × 3–64 |
| Convolution 3 × 3–64 | Convolution 3 × 3–64 | Convolution 3 × 3–64 | Convolution 3 × 3–64 |
| Maxpooling 2 × 2 | Maxpooling 2 × 2 | Maxpooling 2 × 2 | Maxpooling 2 × 2 |
| Reshape layer | Reshape layer | Reshape layer | Reshape layer |
| Stacked LSTM 64 | Stacked GRU 64 | Stacked Bidirectional LSTM 64 | Stacked Bidirectional GRU 64 |
| Stacked LSTM 64 | Stacked GRU 64 | Stacked Bidirectional LSTM 64 | Stacked Bidirectional GRU 64 |
| Stacked LSTM 64 | Stacked GRU 64 | Stacked Bidirectional LSTM 64 | Stacked Bidirectional GRU 64 |
| Softmax | Softmax | Softmax | Softmax |
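The hybrid 2D-CNN-BidGRU column can be sketched by chaining the convolutional trunk, the reshape layer, and three stacked bidirectional GRU layers from the table. The padding mode, the exact reshape (4 × 4 spatial positions flattened into a 16-step sequence), and the activations are assumptions.

```python
# Hedged Keras sketch of the 2D-CNN-BidGRU hybrid from the table.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_bidgru(n_classes=3):
    return keras.Sequential([
        keras.Input(shape=(16, 16, 1)),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),                     # -> 8 x 8 x 64
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),                     # -> 4 x 4 x 64
        layers.Reshape((16, 64)),                   # 4*4 positions as sequence steps
        layers.Bidirectional(layers.GRU(64, return_sequences=True)),
        layers.Bidirectional(layers.GRU(64, return_sequences=True)),
        layers.Bidirectional(layers.GRU(64)),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_cnn_bidgru()
p = model(np.random.rand(2, 16, 16, 1).astype("float32"))   # (2, 3)
```

The other three hybrids swap the recurrent layers (`layers.LSTM`, `layers.GRU`, `Bidirectional(layers.LSTM)`) while keeping the same convolutional trunk.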
Original data (pixel number) per class

| Sample Name | Background | Healthy | Disease | Total |
|---|---|---|---|---|
| Sample 1 on the first day | 30,821 | 85,777 | 13,917 | 130,515 |
| Sample 2 on the first day | 65,452 | 72,933 | 6517 | 144,902 |
| Sample 3 on the first day | 74,778 | 89,198 | 17,480 | 181,456 |
| Sample 1 on the second day | 10,535 | 29,001 | 24,635 | 64,171 |
| Sample 2 on the second day | 38,515 | 33,913 | 31,114 | 103,542 |
| Sample 3 on the second day | 64,105 | 81,521 | 38,988 | 184,614 |
| Total | 284,206 | 392,343 | 132,651 | 809,200 |
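The pixel counts in the table above are internally consistent, which can be cross-checked directly: each row's three class counts sum to its Total, and each column sums to the bottom Total row.

```python
# Cross-check of the dataset table's row and column totals.
rows = {
    "Sample 1, day 1": (30821, 85777, 13917, 130515),
    "Sample 2, day 1": (65452, 72933, 6517, 144902),
    "Sample 3, day 1": (74778, 89198, 17480, 181456),
    "Sample 1, day 2": (10535, 29001, 24635, 64171),
    "Sample 2, day 2": (38515, 33913, 31114, 103542),
    "Sample 3, day 2": (64105, 81521, 38988, 184614),
}
column_totals = (284206, 392343, 132651, 809200)

# Every row sums to its own Total; every column sums to the Total row.
row_ok = all(sum(r[:3]) == r[3] for r in rows.values())
col_ok = all(sum(r[i] for r in rows.values()) == column_totals[i]
             for i in range(4))
```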
| Method | RBF-SVM | 1D-CNN | 2D-CNN | LSTM | GRU | 2D-CNN-LSTM | 2D-CNN-GRU | 2D-CNN-BidLSTM | 2D-CNN-BidGRU |
|---|---|---|---|---|---|---|---|---|---|
| Training ACC | 0.705 | 0.815 | 0.839 | 0.839 | 0.847 | 0.842 | 0.842 | 0.843 | 0.849 |
| Validation ACC | 0.706 | 0.811 | 0.830 | 0.812 | 0.820 | 0.830 | 0.838 | 0.840 | 0.846 |
| Training Loss | - 1 | 0.439 | 0.384 | 0.373 | 0.356 | 0.370 | 0.372 | 0.369 | 0.355 |
| Validation Loss | - 1 | 0.437 | 0.394 | 0.443 | 0.420 | 0.400 | 0.376 | 0.372 | 0.358 |
| Epoch | - 1 | 75 | 299 | 124 | 84 | 62 | 250 | 252 | 269 |
| Training time (h) | 1.7 | 16 | 11.8 | 6.2 | 13 | 11.6 | 11.8 | 12 | 12.4 |
| Method | RBF-SVM | 1D-CNN | 2D-CNN | LSTM | GRU | 2D-CNN-LSTM | 2D-CNN-GRU | 2D-CNN-BidLSTM | 2D-CNN-BidGRU |
|---|---|---|---|---|---|---|---|---|---|
| F1-score (Background) | 0.75 | 0.84 | 0.86 | 0.82 | 0.84 | 0.84 | 0.86 | 0.88 | 0.88 |
| F1-score (Healthy) | 0.53 | 0.66 | 0.67 | 0.66 | 0.69 | 0.70 | 0.69 | 0.71 | 0.71 |
| F1-score (Diseased) | 0.42 | 0.50 | 0.51 | 0.49 | 0.50 | 0.50 | 0.51 | 0.52 | 0.52 |
| F1-score (Total) | 0.60 | 0.71 | 0.72 | 0.70 | 0.72 | 0.73 | 0.73 | 0.75 | 0.75 |
| Accuracy | 0.61 | 0.703 | 0.720 | 0.689 | 0.710 | 0.713 | 0.724 | 0.737 | 0.743 |
| Testing efficiency (pixels/s) | 371 | 2281 | 4185 | 2810 | 2770 | 4215 | 4215 | 3658 | 3658 |
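The per-class precision, recall, and F1-score reported in the table are derived from the testing confusion matrix in the standard way. A minimal sketch of that computation follows; the confusion-matrix values here are made up for illustration and are not the paper's results.

```python
# Per-class precision, recall, and F1 from a confusion matrix
# (rows = true class, columns = predicted class).
import numpy as np

def per_class_scores(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)        # TP / (TP + FP)
    recall = tp / cm.sum(axis=1)           # TP / (TP + FN)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

cm = [[90,  5,  5],     # background (illustrative values)
      [10, 70, 20],     # healthy
      [ 5, 15, 80]]     # diseased
p, r, f1 = per_class_scores(cm)
accuracy = np.trace(cm) / np.sum(cm)       # overall accuracy
```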
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Jin, X.; Jie, L.; Wang, S.; Qi, H.J.; Li, S.W. Classifying Wheat Hyperspectral Pixels of Healthy Heads and Fusarium Head Blight Disease Using a Deep Neural Network in the Wild Field. Remote Sens. 2018, 10, 395. https://doi.org/10.3390/rs10030395