Plant Leaf Disease Recognition Using Depth-Wise Separable Convolution-Based Models
Figure 1. The proposed framework for recognizing plant leaf disease.
Figure 2. Samples of plant leaf disease images under various health conditions, in various backgrounds, and with different symptoms: (a) Rice Sheath-rot; (b) Rice Tungro; (c) Rice Bacterial leaf-blight; (d) Rice Blast; (e) Potato Late-blight; (f) Pepper Bacterial-spot; (g) Potato Early-blight; (h) Grape Black-measles; (i) Corn Northern Leaf-blight; (j) Apple Black-rot; (k) Mango Sooty-mold; and (l) Cherry Powdery-mildew.
Figure 3. Directional disturbance: (a) original Rice Blast image; (b) rotated by 45°; (c) rotated by 90°; (d) rotated by 180°; (e) rotated by 270°; (f) horizontal mirror symmetry; (g) vertical mirror symmetry.
Figure 4. Illumination disturbance: (a) original Rice Blast image; (b) brightened; (c) darkened; (d) reduced contrast; (e) increased contrast; (f) sharpened; (g) blurred.
Figure 5. Effect of image enhancement on recognizing PLD: (a) rice blast disease image and (b) apple black rot disease image; (c,d) are the histograms of (a,b), respectively; (e,g) are the color segmentation results of (a,b), respectively, using traditional K-means clustering without image enhancement, which retain extra noise; (f,h) are the segmentation results of (a,b), respectively, using our modified color segmentation algorithm with image enhancement.
Figure 6. Effect of our modified segmentation technique under different critical environments: (a–e) RGB PLD samples; (f–j) segmented regions of interest (ROIs) of (a–e) after applying adaptive centroid-based segmentation.
Figure 7. Comparison among various convolutions.
Figure 8. Primary modules for PLD recognition: (a) traditional convolutional layer; (b) quantization-friendly depth-wise separable convolution; (c) depth-wise separable convolution proposed in MobileNet.
Figure 9. Primary module of MobileNetV2 for PLD recognition.
Figure 10. (a) Confusion matrix for recognizing PLDs; (b) ROC curve of each PLD; (c) accuracy curve; and (d) loss curve for the S-modified MobileNet-based recognition framework.
Figure 11. (a) Confusion matrix for recognizing PLDs; (b) ROC curve of each PLD; (c) accuracy curve; and (d) loss curve for the S-reduced MobileNet-based recognition framework.
Figure 12. (a) Confusion matrix for recognizing PLDs; (b) ROC curve of each PLD; (c) accuracy curve; and (d) loss curve for the S-extended MobileNet-based recognition framework.
Figure 13. Processing steps of the depth-wise separable convolutional PLD (DSCPLD) recognition framework using S-modified MobileNet: (a) original Rice Blast image; (b) segmented image after applying adaptive centroid-based segmentation (ACS); (c) activations of the first CONV layer; (d) activations of the first ReLU layer; (e) activations of the first max-pooling layer; (f) activations of the first separable CONV layer; (g) activations of the second separable CONV layer; (h) activations of the second max-pooling layer; (i) activations of the second ReLU layer; (j) activations of the third separable CONV layer; (k) activations of the fourth separable CONV layer; (l) activations of the third max-pooling layer; (m) activations of the third ReLU layer; (n) activations of the fifth separable CONV layer; (o) activations of the sixth separable CONV layer; (p) activations of the fourth max-pooling layer; (q) activations of the fourth ReLU layer; and (r) predicted result.
Figure 14. (a) Confusion matrix for recognizing PLDs and (b) ROC curve of each PLD for the F-modified MobileNet-based recognition framework.
Abstract
1. Introduction
- (i) A new dataset is introduced that covers PLD images with diversified backgrounds. The images are augmented with both direction-based and illumination-based disturbances so that PLDs can be recognized under natural circumstances.
- (ii) A modified segmentation technique is introduced that traces the accurate ROI irrespective of diversified backgrounds, uneven illumination, and orientation. This increases the robustness of our DSCPLD recognition framework and reduces the drop in accuracy when testing on an independent dataset.
- (iii) Various modified and reduced DSC-based architectures are developed, using both segmented and full PLD images, to establish a concrete trade-off among accuracy, parameter size, and computational latency for mobile and IoT-based PLD recognition.
2. Literature Review
- (i) diversified data with heterogeneous backgrounds, such as natural and complex backgrounds and uncontrolled capture conditions;
- (ii) accurate identification is difficult because various plant diseases share similar symptoms;
- (iii) a drastic fall in accuracy on unseen data;
- (iv) identification of disease phases as symptoms change.
- (i) sustainable accuracy; to achieve this, we:
- use diversified data with heterogeneous backgrounds, such as natural and complex backgrounds and uncontrolled capture conditions;
- use a segmentation phase to eradicate unnecessary noise;
- test on a dataset that is not part of the training set.
- (ii) investigate memory requirements and computational latency to integrate our model into mobile devices.
3. Materials and Proposed Method
3.1. Dataset
3.1.1. Adding Direction Disturbance to Dataset
3.1.2. Adding Lighting Disturbance to Dataset
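The direction and illumination disturbances of Sections 3.1.1 and 3.1.2 (Figures 3 and 4) can be sketched as below. The transform types come from the figure captions; the enhancement factors and blur radius are illustrative assumptions, since the excerpt does not state them (using Pillow):

```python
from PIL import Image, ImageEnhance, ImageFilter, ImageOps

def direction_disturbance(img):
    """Rotations and mirror symmetries, as in Figure 3."""
    return {
        "rot45": img.rotate(45, expand=True),
        "rot90": img.rotate(90, expand=True),
        "rot180": img.rotate(180, expand=True),
        "rot270": img.rotate(270, expand=True),
        "h_mirror": ImageOps.mirror(img),  # horizontal mirror symmetry
        "v_mirror": ImageOps.flip(img),    # vertical mirror symmetry
    }

def illumination_disturbance(img):
    """Brightness, contrast, sharpness, and blur variants, as in Figure 4.
    All factors are assumptions, not values from the paper."""
    return {
        "brighter": ImageEnhance.Brightness(img).enhance(1.4),
        "darker": ImageEnhance.Brightness(img).enhance(0.6),
        "less_contrast": ImageEnhance.Contrast(img).enhance(0.6),
        "more_contrast": ImageEnhance.Contrast(img).enhance(1.4),
        "sharpened": ImageEnhance.Sharpness(img).enhance(2.0),
        "blurred": img.filter(ImageFilter.GaussianBlur(radius=2)),
    }
```

Each original image thus yields six directional and six illumination variants, matching the panel counts in Figures 3 and 4.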
3.2. Enhancing Image Using Statistical Features
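The exact statistical-feature enhancement of Section 3.2 is not reproduced in this excerpt; a minimal sketch of one common choice, per-channel contrast stretching around the mean using the standard deviation, is shown below (the window factor `k` is an assumption):

```python
import numpy as np

def enhance_statistical(img, k=1.5):
    """Stretch each channel linearly over [mean - k*std, mean + k*std].
    A generic statistical enhancement sketch; the paper's method may differ."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        ch = img[..., c]
        mu, sigma = ch.mean(), ch.std()
        lo, hi = mu - k * sigma, mu + k * sigma
        # map [lo, hi] to [0, 255], clipping values outside the window
        out[..., c] = np.clip((ch - lo) / max(hi - lo, 1e-6), 0.0, 1.0) * 255.0
    return out.astype(np.uint8)
```

Stretching the histogram in this way is what makes the leaf lesions stand out before clustering, as illustrated by the histograms in Figure 5.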
3.3. Clustering by Adaptive Centroid-Based Segmentation
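The color clustering behind the segmentation in Section 3.3 can be sketched with a plain K-means over pixel colors. The paper's adaptive centroid initialization (ACS) is not detailed in this excerpt; here centroids are seeded by spreading them across the intensity-sorted pixels, which is an assumption standing in for it:

```python
import numpy as np

def kmeans_color_segment(img, k=3, iters=20):
    """Cluster pixel colours with K-means; returns per-pixel labels and
    the final centroids. Deterministic seeding is a stand-in for the
    paper's adaptive centroid initialisation."""
    h, w, c = img.shape
    X = img.reshape(-1, c).astype(np.float64)
    # seed centroids spread across the intensity-sorted pixel list
    order = np.argsort(X.sum(axis=1), kind="stable")
    seeds = order[np.linspace(0, len(X) - 1, k).astype(int)]
    centroids = X[seeds].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign every pixel to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its pixels (skip empty clusters)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels.reshape(h, w), centroids
```

The ROI is then the cluster(s) whose centroid color matches lesion tones, which is what Figures 5 and 6 visualize.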
3.4. Recognition by DSCPLD Models
3.4.1. Depth-wise Separable Convolution
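The parameter saving of depth-wise separable convolution over standard convolution can be checked against the per-layer counts in the tables of Section 3.4.3. Counting follows the Keras convention (bias on the standard and point-wise stages, none on the depth-wise stage), which is an assumption consistent with the tabulated values:

```python
def standard_conv_params(k, m, n):
    """k x k standard convolution, m input channels, n output channels:
    one k*k*m kernel per output channel, plus a bias per output channel."""
    return k * k * m * n + n

def separable_conv_params(k, m, n):
    """Depth-wise separable convolution: a k x k depth-wise kernel per
    input channel (no bias), then a 1x1 point-wise stage with bias."""
    depthwise = k * k * m
    pointwise = m * n + n
    return depthwise + pointwise
```

For example, a 3x3 layer from 32 to 64 channels costs 18,496 parameters as a standard convolution but only 2400 as a separable one (the value in the S-modified MobileNet table), roughly a 1/N + 1/K² reduction.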
3.4.2. Basic Depth-wise Separable Convolution Modules
3.4.3. Model Design and Tuning
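The S-modified MobileNet of Section 3.4.3 can be reconstructed from its layer/parameter table as a Keras sketch. The layer order and parameter counts follow the table; the input resolution and ReLU placement are assumptions (parameter counts are independent of resolution):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_s_modified_mobilenet(input_shape=(224, 224, 3), num_classes=12):
    """S-modified MobileNet body; per-layer comments give the parameter
    counts from the table, summing to 412,332 (~0.41 M)."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),            # 896
        layers.MaxPooling2D(2),
        layers.SeparableConv2D(64, 3, padding="same", activation="relu"),   # 2400
        layers.SeparableConv2D(64, 3, padding="same", activation="relu"),   # 4736
        layers.MaxPooling2D(2),
        layers.SeparableConv2D(128, 3, padding="same", activation="relu"),  # 8896
        layers.SeparableConv2D(128, 3, padding="same", activation="relu"),  # 17,664
        layers.MaxPooling2D(2),
        layers.SeparableConv2D(256, 3, padding="same", activation="relu"),  # 34,176
        layers.SeparableConv2D(256, 3, padding="same", activation="relu"),  # 68,096
        layers.MaxPooling2D(2),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1024, activation="relu"),                              # 263,168
        layers.Dense(num_classes, activation="softmax"),                    # 12,300
    ])
```

The total of 412,332 parameters matches the 0.41 M figure reported for S-modified MobileNet in Section 4.4.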
4. Experimental Results and Observations
4.1. Hardware Requirements
4.2. Dataset Collection
4.3. Performance Evaluation of Our DSCPLD Frameworks Based on Mean Accuracy and Mean F1-Score Using Segmented Images
4.4. Performance Evaluation of Our DSCPLD Frameworks Using Segmented Images Based on Model Size and Computational Latency
4.5. Selection of the Best DSCPLD Framework Based on All Criteria
4.6. Processing Steps Using Our DSCPLD Framework
- effectiveness of our segmentation technique in complex situations.
- accurate recognition against natural backgrounds.
4.7. Performance Evaluation of Our PLD Frameworks Using Segmented Images and Full Leaf Images
4.8. Performance Evaluation of Our PLD Frameworks Using Various Parameters on MobileNetV3
4.9. Evaluation of Generalization for Our DSCPLD Framework
4.10. Comparison among Some Benchmark PLD Recognition Frameworks
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
CNN | convolutional neural network |
PLD | plant leaf disease |
DSCPLD | depth-wise separable convolution-based PLD |
ACS | adaptive centroid-based segmentation |
Faster R-CNN with TDM | faster R-CNN with top down modulation |
Faster R-CNN with FPN | faster R-CNN with feature pyramid network |
GAN | generative adversarial network |
R | resolved |
PR | partially resolved |
NR | not resolved |
S-modified MobileNet | modified MobileNet using segmented leaf images |
S-reduced MobileNet | reduced MobileNet using segmented leaf images |
S-extended MobileNet | extended MobileNet using segmented leaf images |
F-modified MobileNet | modified MobileNet using full leaf images |
F-reduced MobileNet | reduced MobileNet using full leaf images |
F-extended MobileNet | extended MobileNet using full leaf images |
BPNN | backpropagation neural network |
SVM | support vector machine |
DFTF | dense scale-invariant feature transform features |
BOVW | bag of visual words |
MLP | multi-layer perceptron |
HLBP | histogram-based local binary pattern |
HaarWT | haar wavelet transformation |
RF | random forest |
LR | logistic regression |
References
- Savary, S.; Ficke, A.; Aubertot, J.N.; Hollier, C. Crop losses due to diseases and their implications for global food production losses and food security. Food Secur. 2012, 4, 519–537. [Google Scholar] [CrossRef]
- Li, J.; Tang, Y.; Zou, X.; Lin, G.; Wang, H. Detection of Fruit-Bearing Branches and Localization of Litchi Clusters for Vision-Based Harvesting Robots. IEEE Access 2020, 8, 117746–117758. [Google Scholar] [CrossRef]
- Chen, M.; Tang, Y.; Zou, X.; Huang, K.; Huang, Z.; Zhou, H.; Wang, C.; Lian, G. Three-dimensional perception of orchard banana central stock enhanced by adaptive multi-vision technology. Comput. Electron. Agric. 2020, 174, 105508. [Google Scholar] [CrossRef]
- Barbedo, J.G.A. Factors influencing the use of deep learning for plant disease recognition. Biosyst. Eng. 2018, 172, 84–91. [Google Scholar] [CrossRef]
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
- Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
- Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep neural networks based recognition of plant diseases by leaf image classification. Comput. Intell. Neurosci. 2016, 2016, 3289801. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Brahimi, M.; Mahmoudi, S.; Boukhalfa, K.; Moussaoui, A. Deep interpretable architecture for plant diseases classification. In Proceedings of the Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), Poznan, Poland, 18–20 September 2019; pp. 111–116. [Google Scholar]
- Too, E.C.; Yujian, L.; Njuki, S.; Yingchun, L. A comparative study of fine-tuning deep learning models for plant disease identification. Comput. Electron. Agric. 2019, 161, 272–279. [Google Scholar] [CrossRef]
- Liang, W.J.; Zhang, H.; Zhang, G.F.; Cao, H.X. Rice blast disease recognition using a deep convolutional neural network. Sci. Rep. 2019, 9, 1–10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
- Amara, J.; Bouaziz, B.; Algergawy, A. A Deep Learning-based Approach for Banana Leaf Diseases Classification. In Datenbanksysteme für Business, Technologie und Web (BTW 2017)-Workshopband; Mitschang, B., Nicklas, D., Leymann, F., Schöning, H., Herschel, M., Teubner, J., Härder, T., Kopp, O., Wieland, M., Eds.; Gesellschaft für Informatik e.V.: Bonn, Germany, 2017; pp. 79–88. [Google Scholar]
- Rahman, C.R.; Arko, P.S.; Ali, M.E.; Khan, M.A.I.; Apon, S.H.; Nowrin, F.; Wasif, A. Identification and recognition of rice diseases and pests using convolutional neural networks. Biosyst. Eng. 2020, 194, 112–120. [Google Scholar] [CrossRef] [Green Version]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- Boulent, J.; Foucher, S.; Théau, J.; St-Charles, P.L. Convolutional neural networks for the automatic identification of plant diseases. Front. Plant Sci. 2019, 10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Liu, B.; Zhang, Y.; He, D.; Li, Y. Identification of Apple Leaf Diseases Based on Deep Convolutional Neural Networks. Symmetry 2018, 10, 11. [Google Scholar] [CrossRef] [Green Version]
- Arsenovic, M.; Karanovic, M.; Sladojevic, S.; Anderla, A.; Stefanovic, D. Solving Current Limitations of Deep Learning Based Approaches for Plant Disease Detection. Symmetry 2019, 11, 939. [Google Scholar] [CrossRef] [Green Version]
- Sharma, P.; Berwal, Y.P.S.; Ghai, W. Performance analysis of deep learning CNN models for disease detection in plants using image segmentation. Inf. Process. Agric. 2020, 7, 566–574. [Google Scholar] [CrossRef]
- Chen, J.; Liu, Q.; Gao, L. Visual Tea Leaf Disease Recognition Using a Convolutional Neural Network Model. Symmetry 2019, 11, 343. [Google Scholar] [CrossRef] [Green Version]
- Patidar, S.; Pandey, A.; Shirish, B.A.; Sriram, A. Rice Plant Disease Detection and Classification Using Deep Residual Learning. In International Conference on Machine Learning, Image Processing, Network Security and Data Sciences; Springer: Singapore, 2020; pp. 278–293. [Google Scholar]
- Barbedo, J.G.A. A review on the main challenges in automatic plant disease identification based on visible range images. Biosyst. Eng. 2016, 144, 52–60. [Google Scholar] [CrossRef]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Sheng, T.; Feng, C.; Zhuo, S.; Zhang, X.; Shen, L.; Aleksic, M. A Quantization-Friendly Separable Convolution for MobileNets. In Proceedings of the 1st Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications (EMC2), Williamsburg, VA, USA, 25 March 2018. [Google Scholar] [CrossRef] [Green Version]
- Brahimi, M.; Arsenovic, M.; Laraba, S.; Sladojevic, S.; Kamel, B.; Moussaoui, A. Deep Learning for Plant Diseases: Detection and Saliency Map Visualisation. In Human and Machine Learning; Springer: Cham, Switzerland, 2018. [Google Scholar]
- Kaur, S.; Pandey, S.; Goel, S. Plants Disease Identification and Classification Through Leaf Images: A Survey. Arch. Comput. Methods Eng. 2019, 26, 507–530. [Google Scholar] [CrossRef]
- Fuentes, A.; Yoon, S.; Kim, S.C.; Park, D.S. A Robust Deep-Learning-Based Detector for Real-Time Tomato Plant Diseases and Pests Recognition. Sensors 2017, 17, 2022. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Barbedo, J.G.A. Plant disease identification from individual lesions and spots using deep learning. Biosyst. Eng. 2019, 180, 96–107. [Google Scholar] [CrossRef]
- Lu, Y.; Yi, S.; Zeng, N.; Liu, Y.; Zhang, Y. Identification of rice diseases using deep convolutional neural networks. Neurocomputing 2017, 267, 378–384. [Google Scholar] [CrossRef]
- Qi, H.; Liang, Y.; Ding, Q.; Zou, J. Automatic Identification of Peanut-Leaf Diseases Based on Stack Ensemble. Appl. Sci. 2021, 11, 1950. [Google Scholar] [CrossRef]
- PlantVillage. Available online: https://www.kaggle.com/emmarex/plantdisease (accessed on 17 February 2021).
- Rice Disease Image Dataset. Available online: https://www.kaggle.com/minhhuy2810/rice-diseases-image-dataset (accessed on 17 February 2021).
- Rice Knowledge Bank. Available online: https://www.irri.org (accessed on 17 February 2021).
- Bangladesh Rice Knowledge Bank. Available online: http://knowledgebank-brri.org (accessed on 17 February 2021).
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; Le, Q.V.; et al. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019. [Google Scholar]
- Calculation of MACC and FLOPs in CNN Layers. Available online: https://www.programmersought.com/article/27982165768 (accessed on 17 February 2021).
References | Data Collected from | Classes/Species | Number of Images | Data Augmentation | CNN Architecture | Accuracy |
---|---|---|---|---|---|---|
[6] | PlantVillage | 58/25 | 54,309 | Yes | VGG | 99.53% |
[7] | PlantVillage | 38/14 | 54,306 | Yes | GoogleNet | 99.35% |
[8] | Collected | 15/6 | 4483 | Yes | Modified CaffeNet | 96.30% |
[10] | PlantVillage | 38/14 | 54,305 | Yes | DenseNet121 | 99.75% |
[11] | Collected | 2/1 | 5808 | Yes | Custom | 95.83% |
[13] | PlantVillage | 3/1 | 3700 | Yes | Modified LeNet | 92.88% |
[14] | Collected | 9/1 | 1426 | Yes | Two stage CNN | 93.3% |
[18] | Collected | 4/1 | 1053 | Yes | Modified AlexNet | 97.62% |
[19] | PlantVillage, Collected | 42/12 | 79,265 | Yes | ResNet152 | 90.88% |
[20] | PlantVillage, Collected | 10/1 | 17,929 | N/A | F-CNN, S-CNN | 98.6% |
[21] | Collected | 7/1 | 7905 | Yes | Custom | 90.16% |
[26] | PlantVillage | 38/14 | 54,323 | Yes | InceptionV3 | 99.76% |
[28] | Collected | 9/1 | 5000 | Yes | R-FCNN, ResNet50 | 85.98% |
[29] | Collected | 56/14 | 1567 | Yes | GoogleNet | 94% |
[30] | Collected | 10/1 | 500 | No | Custom | 95.48% |
[31] | Collected | 6/1 | 6029 | Yes | DenseNet+RF | 97.59% |
References | Fall in Accuracy | Complex Background | Multiple Diseases in a Sample | Train and Test Data from Same Dataset | Computational Complexity | Memory Restrictions |
---|---|---|---|---|---|---|
[6] | NR | NR | PR | NR | NR | NR |
[7] | NR | NR | NR | NR | NR | NR |
[8] | NR | R | R | NR | NR | NR |
[10] | NR | NR | NR | NR | NR | NR |
[11] | NR | NR | NR | NR | NR | NR |
[13] | NR | NR | NR | NR | NR | NR |
[14] | NR | PR | NR | NR | NR | R |
[18] | R | NR | NR | NR | NR | NR |
[19] | R | R | R | R | R | NR |
[20] | R | R | R | R | NR | NR |
[21] | NR | NR | NR | NR | NR | NR |
[26] | NR | PR | NR | NR | NR | NR |
[28] | PR | R | R | NR | NR | NR |
[29] | R | PR | NR | NR | NR | NR |
[30] | NR | PR | NR | NR | NR | NR |
[31] | NR | R | PR | NR | NR | NR |
Disease Class | #Org. Images | Train | Validation | Test |
---|---|---|---|---|
Corn_northern_blight | 800 | 560 | 160 | 80 |
Pepper_bacterial_spot | 800 | 560 | 160 | 80 |
Grape_black_measles | 540 | 378 | 108 | 54 |
Rice_blast | 840 | 588 | 168 | 84 |
Rice_bacterial_leaf_blight | 950 | 665 | 190 | 95 |
Rice_sheath_rot | 400 | 280 | 80 | 40 |
Rice_Tungro | 250 | 175 | 50 | 25 |
Potato_early_blight | 820 | 574 | 164 | 82 |
Potato_late_blight | 310 | 217 | 62 | 31 |
Apple_black_rot | 210 | 147 | 42 | 21 |
Mango_sooty_mold | 310 | 217 | 62 | 31 |
Cherry_powdery_mildew | 350 | 245 | 70 | 35 |
Total | 6580 | 4606 | 1316 | 658 |
Function | Filter/Pool | #Filters | Output | #Parameters |
---|---|---|---|---|
Input | - | - | 0 | |
Convolution | 32 | 896 | ||
Max pooling | - | 0 | ||
Separable Convolution | 64 | 2400 | ||
Separable Convolution | 64 | 4736 | ||
Max pooling | - | 0 | ||
Separable Convolution | 128 | 8896 | ||
Separable Convolution | 128 | 17,664 | ||
Max pooling | - | 0 | ||
Separable Convolution | 256 | 34,176 | ||
Separable Convolution | 256 | 68,096 | ||
Max pooling | - | 0 | ||
Global Average Pooling | - | - | 0 | |
Dense | - | - | 263,168 | |
Dense | - | - | 12,300 | |
Softmax | - | - | 0 |
Function | Filter/Pool | #Filters | Output | #Parameters |
---|---|---|---|---|
Input | - | - | 0 | |
Convolution | 32 | 896 | ||
Depth-wise Convolution | 32 | 32,800 | ||
Point-wise Convolution | 64 | 2112 | ||
Depth-wise Convolution | 64 | 262,208 | ||
Point-wise Convolution | 128 | 8320 | ||
Global Average Pooling | - | - | 0 | |
Dense | - | - | 1548 | |
Softmax | - | - | 0 |
Function | Filter/Pool | #Filters | Output | #Parameters |
---|---|---|---|---|
Input | - | - | 0 | |
Convolution | 32 | 896 | ||
Depth-wise Convolution | 32 | 32,800 | ||
Point-wise Convolution | 64 | 2112 | ||
Depth-wise Convolution | 64 | 262,208 | ||
Point-wise Convolution | 128 | 8320 | ||
Max pooling | - | 0 | ||
Dense | - | - | 525,312 |
Dense | - | - | 12,300 | |
Softmax | - | - | 0 |
Hyper-Parameters | SGD | Adam | RMSprop |
---|---|---|---|
Epochs | 50–150 | 50–150 | 50–150 |
Batch size | 32, 64 | 32, 64 | 32, 64 |
Learning rate | 0.001 | 0.001, 0.0001 | 0.0001 |
β1 | - | 0.9 | - |
β2 | - | 0.999 | - |
Momentum | 0.8, 0.9 | - | - |
Sources | Species | Diseases | No. of Training Images | No. of Validation Images | No. of Test Images | No. of Training Images (Source-Wise) | No. of Validation Images (Source-Wise) | No. of Test Images (Source-Wise) |
---|---|---|---|---|---|---|---|---|
PlantVillage | pepper | Bacterial-spot | 560 | 160 | 80 | 2898 | 1459 | 414 |
Potato | Early-blight | 574 | 164 | 82 | ||||
Late-blight | 217 | 62 | 31 | |||||
Corn | Northern-blight | 560 | 160 | 80 | ||||
Mango | Sooty-mold | 217 | 62 | 31 | ||||
Apple | Black-rot | 147 | 42 | 21 | ||||
Cherry | Powdery-mildew | 245 | 70 | 35 | ||||
Grape | Black-measles | 378 | 108 | 54 | ||||
Kaggle | Rice | Blast | 588 | 168 | 84 | 1253 | 358 | 179 |
Bacterial leaf-blight | 665 | 190 | 95 | |||||
IRRI/BRRI/ other sources | Rice | Sheath-rot | 280 | 80 | 40 | 455 | 130 | 65 |
Tungro | 175 | 50 | 25 | |||||
Total images | 4606 | 1316 | 658 |
Models | Training Accuracy | Validation Accuracy | Mean Test Accuracy | Mean F1-Score |
---|---|---|---|---|
VGG16 | 99.91% | 99.53% | 99.21% | 96.74% |
VGG19 | 99.93% | 99.53% | 99.39% | 96.91% |
AlexNet | 99.07% | 98.82% | 98.78% | 96.31% |
MobileNetV1 | 99.93% | 99.41% | 99.24% | 95.67% |
MobileNetV2 | 99.96% | 99.82% | 99.41% | 96.07% |
MobileNetV3 | 100% | 99.89% | 99.55% | 96.97% |
S-extended MobileNet | 99.78% | 99.31% | 98.37% | 95.92% |
S-reduced MobileNet | 99.93% | 99.70% | 99.41% | 96.93% |
S-modified MobileNet | 100% | 99.70% | 99.55% | 97.07% |
Models | Image Size | FLOPs | MACC | # Parameters |
---|---|---|---|---|
VGG16 | 213.5 M | 106.75 M | 15.2 M | |
VGG19 | 287.84 M | 143.92 M | 20.6 M | |
AlexNet | 127.68 M | 63.84 M | 6.4 M | |
MobileNetV1 | 83.87 M | 41.93 M | 3.2 M | |
MobileNetV2 | 81.91 M | 40.96 M | 1.61 M | |
MobileNetV3 | 59.8 M | 29.90 M | 3.2 M | |
S-extended MobileNet | 16.86 M | 8.43 M | 0.84 M | |
S-reduced MobileNet | 3.70 M | 2.15 M | 0.31 M | |
S-modified MobileNet | 5.78 M | 2.89 M | 0.41 M |
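The MACC and FLOPs columns above follow the per-layer counting described in [37], with FLOPs taken as roughly twice the MACC count. A minimal sketch of that counting is below; the example layer sizes in the test are illustrative assumptions, not values from the tables:

```python
def conv_macc(k, c_in, c_out, h_out, w_out):
    """Multiply-accumulate operations of a k x k convolution producing an
    h_out x w_out x c_out feature map."""
    return k * k * c_in * c_out * h_out * w_out

def separable_conv_macc(k, c_in, c_out, h_out, w_out):
    """Depth-wise pass (k x k per input channel) plus 1x1 point-wise pass."""
    depthwise = k * k * c_in * h_out * w_out
    pointwise = c_in * c_out * h_out * w_out
    return depthwise + pointwise

def flops_from_macc(macc):
    # one multiply and one add per MACC, matching the tables' FLOPs ~ 2 x MACC
    return 2 * macc
```

Summing these per-layer counts over a network reproduces the roughly K²-fold MACC advantage of the separable models over their standard-convolution counterparts.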
Models | Training Accuracy | Validation Accuracy | Mean Test Accuracy | Mean F1-Score |
---|---|---|---|---|
VGG16 | 99.78% | 99.39% | 98.78% | 96.32% |
VGG19 | 99.78% | 99.41% | 99.01% | 96.54% |
AlexNet | 98.71% | 98.64% | 98.34% | 95.89% |
MobileNetV1 | 99.81% | 99.43% | 98.79% | 96.54% |
MobileNetV2 | 99.89% | 99.53% | 98.99% | 96.56% |
MobileNetV3 | 99.91% | 99.53% | 99.05% | 96.58% |
F-extended MobileNet | 99.58% | 99.21% | 98.14% | 95.22% |
F-reduced MobileNet | 99.91% | 99.58% | 99.07% | 96.60% |
F-modified MobileNet | 99.91% | 99.63% | 99.10% | 96.63% |
Class | S-modified MobileNet Accuracy (%) | S-modified MobileNet F1-Score (%) | F-modified MobileNet Accuracy (%) | F-modified MobileNet F1-Score (%) |
---|---|---|---|---|
Corn_northern_blight | 99.08 | 96.34 | 98.18 | 92.77 |
Pepper_bacterial_spot | 99.85 | 99.37 | 99.39 | 97.50 |
Grape_black_measles | 99.85 | 99.08 | 99.39 | 96.30 |
Rice_blast | 99.54 | 98.24 | 99.24 | 96.93 |
Potato_early_blight | 100 | 100 | 99.70 | 98.87 |
Apple_black_rot | 99.08 | 84.24 | 98.63 | 80 |
Mango_sooty_mold | 99.85 | 98.36 | 99.39 | 93.75 |
Cherry_powdery_mildew | 99.54 | 95.52 | 98.78 | 87.88 |
Rice_bacterial_leaf_blight | 99.85 | 99.45 | 99.85 | 99.45 |
Potato_late_blight | 99.85 | 98.41 | 99.24 | 91.80 |
Rice_sheath_rot | 99.85 | 98.76 | 98.94 | 91.02 |
Rice_Tungro | 100 | 100 | 99.85 | 97.96 |
Total | 99.55 | 97.07 | 99.10 | 96.63 |
Class | S-reduced MobileNet Accuracy (%) | S-reduced MobileNet F1-Score (%) | F-reduced MobileNet Accuracy (%) | F-reduced MobileNet F1-Score (%) |
---|---|---|---|---|
Corn_northern_blight | 98.63 | 94.54 | 98.18 | 92.77 |
Pepper_bacterial_spot | 99.08 | 99.38 | 99.39 | 97.50 |
Grape_black_measles | 99.54 | 98.15 | 99.39 | 96.30 |
Rice_blast | 99.54 | 98.18 | 99.24 | 96.93 |
Potato_early_blight | 99.85 | 99.40 | 99.70 | 98.87 |
Apple_black_rot | 98.94 | 83.72 | 98.63 | 80 |
Mango_sooty_mold | 99.85 | 98.36 | 99.39 | 93.75 |
Cherry_powdery_mildew | 98.63 | 97.07 | 98.78 | 97.88 |
Rice_bacterial_leaf_blight | 99.85 | 99.48 | 99.85 | 99.45 |
Potato_late_blight | 99.54 | 96.88 | 99.24 | 91.80 |
Rice_sheath_rot | 98.93 | 93.33 | 97.85 | 98.05 |
Rice_Tungro | 100 | 100 | 99.85 | 97.96 |
Total | 99.41 | 96.93 | 99.07 | 96.60 |
Class | S-extended MobileNet Accuracy (%) | S-extended MobileNet F1-Score (%) | F-extended MobileNet Accuracy (%) | F-extended MobileNet F1-Score (%) |
---|---|---|---|---|
Corn_northern_blight | 97.87 | 91.36 | 97.18 | 90.67 |
Pepper_bacterial_spot | 99.08 | 96.25 | 98.79 | 97.50 |
Grape_black_measles | 99.54 | 97.25 | 99.39 | 96.30 |
Rice_blast | 99.39 | 97.62 | 99.24 | 96.93 |
Potato_early_blight | 100 | 100 | 99.70 | 98.87 |
Apple_black_rot | 98.48 | 73.06 | 97.03 | 70.03 |
Mango_sooty_mold | 99.39 | 95.24 | 99.39 | 93.75 |
Cherry_powdery_mildew | 98.63 | 86.96 | 97.78 | 87.88 |
Rice_bacterial_leaf_blight | 99.85 | 99.47 | 99.85 | 99.45 |
Potato_late_blight | 99.54 | 95.24 | 99.24 | 90.67 |
Rice_sheath_rot | 98.93 | 90.67 | 97.85 | 88.05 |
Rice_Tungro | 99.84 | 97.96 | 99.85 | 97.96 |
Total | 98.37 | 95.92 | 98.14 | 95.22 |
Models | Mean Test Accuracy | Mean F1-Score | FLOPs | MACC | # Parameters |
---|---|---|---|---|---|
0.25 MobileNetV3-224 | 95.48% | 93.39% | 4.30 M | 2.15 M | 0.38 M |
0.5 MobileNetV3-224 | 97.78% | 95.01% | 15.66 M | 7.83 M | 0.99 M |
0.75 MobileNetV3-224 | 98.81% | 95.64% | 34.16 M | 17.08 M | 1.98 M |
1.0 MobileNetV3-224 | 99.55% | 96.97% | 59.8 M | 29.90 M | 3.2 M |
Models | Mean Test Accuracy | Mean F1-Score | FLOPs | MACC | # Parameters |
---|---|---|---|---|---|
1.0 MobileNetV3-128 | 96.88% | 95.39% | 19.55 M | 9.77 M | 3.2 M |
1.0 MobileNetV3-160 | 99.08% | 95.78% | 30.48 M | 15.24 M | 3.2 M |
1.0 MobileNetV3-192 | 99.31% | 96.64% | 43.93 M | 21.97 M | 3.2 M |
1.0 MobileNetV3-224 | 99.55% | 96.97% | 59.8 M | 29.90 M | 3.2 M |
Dataset | SGD | Adam | RMSprop |
---|---|---|---|
Rice dataset | 98.25% | 97.05% | 98.53% |
Our PLD dataset | 99.31% | 99.39% | 99.55% |
Dataset | SGD | Adam | RMSprop |
---|---|---|---|
Rice dataset | 90.65% | 92.25% | 95.53% |
Our PLD dataset | 98.39% | 98.53% | 99.10% |
References | Classes/Species | CNN Architecture | Fall in Accuracy | Computational Complexity | Memory Restriction | Accuracy |
---|---|---|---|---|---|---|
[6] | 58/25 | VGG | NR | NR | NR | 99.53% |
[7] | 38/14 | GoogleNet | NR | NR | NR | 99.35% |
[8] | 15/6 | Modified CaffeNet | NR | NR | NR | 96.30% |
[11] | 2/1 | Custom | NR | NR | NR | 95.83% |
[13] | 3/1 | Modified LeNet | NR | NR | NR | 92.88% |
[14] | 9/1 | Two stage CNN | NR | NR | R | 93.3% |
[18] | 4/1 | Modified AlexNet | R | NR | NR | 97.62% |
[19] | 42/12 | ResNet152 | R | R | NR | 90.88% |
[20] | 10/1 | F-CNN, S-CNN | R | NR | NR | 98.6% |
[21] | 7/1 | Custom | NR | NR | NR | 90.16% |
[28] | 9/1 | R-FCNN, ResNet50 | PR | NR | NR | 85.98% |
[29] | 56/14 | GoogleNet | R | NR | NR | 94% |
[30] | 10/1 | Custom | NR | NR | NR | 95.48% |
[31] | 6/1 | DenseNet+RF | NR | NR | NR | 97.59% |
Our work | 12/8 | S-modified MobileNet | R | R | R | 99.55% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Hossain, S.M.M.; Deb, K.; Dhar, P.K.; Koshiba, T. Plant Leaf Disease Recognition Using Depth-Wise Separable Convolution-Based Models. Symmetry 2021, 13, 511. https://doi.org/10.3390/sym13030511