CN113112498B - Grape leaf spot identification method based on fine-grained countermeasure generation network - Google Patents
- Publication number: CN113112498B (application number CN202110488184.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- background
- spot
- disease
- generation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30188—Vegetation; Agriculture
Abstract
The invention belongs to the fields of artificial intelligence and plant protection, as an interdisciplinary application of the two, and relates in particular to a grape leaf lesion identification method based on a fine-grained generative adversarial network, comprising the following steps: data acquisition and labeling; detection and segmentation of salient lesion regions; image enhancement with the fine-grained generative adversarial network; training of a deep learning classification model; and recognition of grape leaf lesions with the trained model. The method mainly addresses the low recognition rate of leaf lesions in the early disease stage, for novel or rare diseases, or when the number of training samples for the lesions is insufficient. It is chiefly intended to recognize grape leaf lesions early in the disease course, so that intervention measures can be taken as soon as possible, laying a foundation for subsequent precision pesticide application, minimizing economic loss, reducing pesticide dosage, and protecting the environment. The invention can also be extended to other plant diseases that manifest as leaf lesions.
Description
Technical Field
The invention relates to the fields of artificial intelligence and plant protection, in particular to a grape leaf lesion identification method based on a fine-grained generative adversarial network (Fine-Grained-GAN).
Background
Common grape diseases mainly include black rot (Black rot), leaf blight (Leaf blight), and black measles (Black measles), and most of them manifest as numerous lesions on the leaves. Detecting the early characteristics of grape diseases in time and intervening accordingly is of great significance for controlling their spread: the sooner intervention measures are taken, the better the foundation for subsequent precision pesticide application, the smaller the economic loss, the lower the pesticide dosage, and the better the environment is protected. Early-stage disease is, however, not easily noticed, and for rare or novel diseases in particular the defining difficulty is insufficient training data and the lack of a pre-trained recognition model. The common remedy is data enhancement: the augmented data are used as the training set so as to improve the feature expression and generalization capability of the model. Existing data enhancement methods, however, generally operate on the whole input image. Since the number of lesions is small in the early stage of disease, enhancement applied to the whole image weakens the lesion information and further amplifies background interference, which is unfavorable for recognizing grape leaf lesions.
The invention provides a grape leaf lesion identification method based on a fine-grained generative adversarial network that performs local image data enhancement on salient lesion regions so as to highlight the lesion features. It generates subgraphs with distinct lesion characteristics and feeds them into a deep learning model for training, which strengthens the generalization capability of the model and effectively improves the accuracy of grape leaf lesion recognition. The method mainly addresses the low recognition rate of leaf lesions when the disease is in its early stage, when the leaf spot disease is novel, when the lesion type is rare, or when training samples are insufficient.
Analysis of the domestic and international research status
Bharate A.A. et al. classified grape leaves as healthy or unhealthy using computer vision, extracting texture, color, and other features of the grape leaves and feeding them into K-nearest-neighbor and support vector machine classifiers; the two algorithms achieved recognition accuracies of 90% and 96.6% respectively. Zhu, JH. et al. (2020) proposed an automatic diagnosis method for grape leaf disease based on image analysis and a BP neural network: Wiener filtering and the Otsu optimal threshold segmentation method are used to segment the lesions, a morphological method refines the lesion shapes, a Prewitt operator then extracts the complete edges of the lesion regions, and finally five features (shape complexity, circularity, perimeter, rectangularity, and lesion area) are extracted and fed into the BP neural network for lesion identification, with relatively high accuracy. Jaisakthi S.M. et al. (2019) proposed an automatic grape leaf disease detection system based on image processing and machine learning: the GrabCut algorithm separates the grape leaf region of interest from the background image, and a global threshold combined with a semi-supervised method segments the lesion region. A support vector machine, AdaBoost, and random forest classifiers were then applied to the extracted features to divide the grape leaves into four classes: leaf blight, black rot, esca, and healthy. Experiments showed that the support vector machine classifier achieved the best performance at 93%. Xie, XY. et al. (2020) proposed a real-time grape leaf disease detection system based on an improved deep convolutional neural network.
First, image processing techniques are used to expand the grape leaf disease images and build a grape leaf disease dataset; an Inception-v1 module is then introduced to detect the disease on top of the high-level semantic features extracted from the images, and experiments on the GLDD dataset yielded an accuracy of 81.1%. Liu, B. et al. (2020) proposed Leaf-GAN, a generative-adversarial-network-based model that generates images of four different grape leaf diseases for training the recognition model. A generation model with degressive channels is first designed; a dense connection network and instance normalization are then introduced to discriminate real from fake disease images; finally, a deep regret gradient penalty mechanism ensures the stability of the training process. From 4062 grape leaf disease images, 8124 images were generated. Experiments showed that a recognition model based on Xception reached an accuracy of 98.70% on the test data. Liu, XD. et al. (2021) proposed a plant disease identification method on a dataset of 220,592 images covering 271 plant disease categories. To reflect the recognition contribution of each patch, the authors first divided each image into patches according to a cluster distribution and computed patch weights, then introduced weakly supervised training that assigns a weight to the loss of each patch-label pair, and finally extracted patch features from the trained network and encoded the weighted feature sequence into a comprehensive representation using an LSTM. Experiments showed that the method achieves good performance.
A search for the keyword "grape leaf spot recognition" in the patent search column of the National Intellectual Property Administration returned no patent literature relevant to grape leaf lesion identification; a search for the keyword "leaf spot identification" returned 2 relevant patent documents in total, as follows:
Patent document CN109165623A (application number CN201811043736.1) discloses a rice lesion detection method and system based on deep learning, which improves the accuracy of rice disease identification by collecting and manually annotating data and training a LinkNet model under the PyTorch deep learning framework. That document, however, does not deeply consider early-stage lesions and limited training samples, which are among the main problems the present invention aims to solve.
Patent document CN102214306B (application number CN201110162727.6) discloses a leaf spot recognition method and device. Its main approach is to acquire images carrying leaf spot information, analyze them with image processing methods to simplify the background, extract and optimize color-channel features, and then cluster the optimized feature space with a K-medoids algorithm so that parts other than the lesions are removed, achieving better leaf spot recognition. That patent works well when the number of lesions is small, but because the recognition accuracy of the traditional K-medoids algorithm differs markedly from that of deep learning algorithms, and a more complex image preprocessing stage is required, it differs considerably from the method proposed in the present invention in large-scale lesion recognition.
Problems with the presently disclosed methods
The grape leaf disease identification methods disclosed above achieve good results in their specific settings. However, the following problems remain:
(1) The data set required for grape leaf spot detection is lacking.
(2) Existing methods mainly study common grape diseases that occur on a large scale; few address early-stage disease or the identification of novel or rare grape leaf diseases.
(3) If a rare or novel disease appears in an actual planting environment, data acquisition is very difficult, and without a sufficient training set a deep learning model often overfits, so high recognition accuracy cannot be guaranteed.
Problems to be solved by the invention
In view of the above shortcomings, the invention mainly makes the following improvements:
(1) A dataset of early-stage grape leaf spot disease is established: the collected leaf lesions are manually annotated to form the grape leaf lesion dataset.
(2) The invention discloses a leaf lesion local-region segmentation method based on an improved Faster R-CNN, which can automatically identify and segment the most representative salient lesion region subgraph in a leaf.
(3) The invention discloses a grape leaf lesion identification method based on a fine-grained generative adversarial network, which improves the recognition rate of leaf lesions under limited training samples.
References
[1] Bharate A.A., Shirdhonkar M.S. Classification of Grape Leaves using KNN and SVM Classifiers. 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 2020, pp. 745-749. http://doi.org/10.1109/ICCMC48092.2020.ICCMC-000139
[2] Zhu, JH., Wu, A., Wang, XS., Zhang, H. Identification of grape diseases using image analysis and BP neural networks. Multimedia Tools and Applications. 2020, 79(21-22): 14539-14551. http://doi.org/10.1007/s11042-018-7092-0
[3] Jaisakthi S.M., Mirunalini P., Thenmozhi D., Vatsala. Grape Leaf Disease Identification using Machine Learning Techniques. 2019 International Conference on Computational Intelligence in Data Science (ICCIDS), Chennai, India, 2019, pp. 1-6. http://doi.org/10.1109/ICCIDS.2019.8862084
[4] Xie, XY., Ma, Y., Liu, B., He, JR., Li, SQ., Wang, HY. A Deep-Learning-Based Real-Time Detector for Grape Leaf Diseases Using Improved Convolutional Neural Networks. Frontiers in Plant Science. 2020, Vol. 11. http://doi.org/10.3389/fpls.2020.00751
[5] Liu, B., Tan, C., Li, SQ., He, JR., Wang, HY. A Data Augmentation Method Based on Generative Adversarial Networks for Grape Leaf Disease Identification. IEEE Access. 2020, Vol. 8: 102188-102198. http://doi.org/10.1109/ACCESS.2020.2998839
[6] Liu, XD., Min, WQ., Mei, SH., Wang, LL., Jiang, SQ. Plant Disease Recognition: A Large-Scale Benchmark Dataset and a Visual Region and Loss Reweighting Approach. IEEE Transactions on Image Processing. 2021, Vol. 30: 2003-2015. http://doi.org/10.1109/TIP.2021.3049334
Disclosure of Invention
In view of the characteristics of early-stage grape leaf lesions, the invention provides a grape leaf lesion identification method based on a fine-grained generative adversarial network, which improves the accuracy of grape leaf lesion identification under limited training samples and has a particularly high recognition rate for early-stage lesions and rare diseases. Referring to fig. 1, the complete data flow of the method is as follows:
The main content can be divided into two stages:
1. Detection and segmentation of the local lesion region. Using the improved Faster R-CNN object detection algorithm, the most representative salient lesion region in the leaf is selected and segmented, to be fed into the Fine-Grained-GAN network for data enhancement; please refer to fig. 2. Compared with the traditional Faster R-CNN object detection algorithm, the main improvements are as follows:
(1) The detection boxes of the traditional Faster R-CNN algorithm vary in size and adapt to the size of the detected target. Here, each leaf outputs only the single most representative lesion detection box, with a unified size of 64 x 64 pixels, which makes the salient-lesion output convenient for deep learning model computation.
(2) Salient lesion region box selection strategy: the largest lesion in a leaf is selected as the salient region. If the lesion region is larger than the detection box, the box containing that largest lesion is output; if the lesion region is smaller than the detection box, the box containing the largest number of lesions is output.
(3) If the detection box extends beyond the edge of the leaf, its position is adjusted so that it contains only lesion and leaf information.
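The fixed-size box placement and edge clamping described in (1)-(3) can be sketched as follows; this is a minimal illustration, and the function and variable names are ours rather than the patent's:

```python
def clamp_box(cx, cy, leaf_w, leaf_h, size=64):
    """Center a fixed size x size detection box at (cx, cy), then shift it
    back inside the leaf image bounds so the crop covers only leaf and
    lesion pixels (the adjustment described in improvement (3))."""
    half = size // 2
    x0 = min(max(cx - half, 0), leaf_w - size)
    y0 = min(max(cy - half, 0), leaf_h - size)
    return x0, y0, x0 + size, y0 + size
```

For example, a lesion centered near the image corner at (10, 10) in a 196 x 196 leaf image still yields a full 64 x 64 crop anchored at the image border.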
2. Image enhancement stage based on the local region. The subgraph generated in the previous stage is automatically divided into a foreground lesion image and a background image; suitable generators and discriminators are selected to generate the foreground lesion and background images respectively, and the generated lesion and background are synthesized into a new image. Referring to fig. 3, the advantages of this method mainly include the following:
(1) The invention focuses on lesion information and uses two generators to generate the shape and the texture of the lesion respectively, so that the characteristics of the original lesion image are preserved more completely.
(2) Background interference is weakened: by adjusting the loss function parameters for the lesion background information, the interference from the background image is reduced as far as possible.
Drawings
FIG. 1 Schematic of the fine-grained generative adversarial network
FIG. 2 Salient lesion region detection and segmentation stage
FIG. 3 Image generation stage
FIG. 4 Flow chart of the comparative experiments of the invention
Detailed Description
The present invention is further described in the following detailed description so that its objects, technical solutions, and advantages may be better understood. The advantages and functions of the invention will be readily apparent to those skilled in the art from the disclosure herein, but the invention is not limited thereto in any way. Variations and modifications can be made by those skilled in the art without departing from the spirit of the invention, and these fall within its scope. The following embodiments are described with reference to the accompanying drawings and, where no conflict arises, can be extended to all plant leaf spot identification applications.
The main flow of the grape leaf lesion identification method based on a fine-grained generative adversarial network provided by the invention is shown in fig. 1; the specific implementation comprises the following steps.
Step 1: data acquisition and labeling. Part of the data used in this embodiment is derived from datasets published on the network, and part from manually collected data; the common characteristic is that all of it is image information of grape leaves at early disease onset, as required by the method of the invention. This embodiment collected a total of 1500 images of 196 x 196 pixels, covering three diseases with 500 images each. All lesions in the collected images are manually annotated with an image labeling tool for use by the lesion detection algorithm.
Step 2: leaf lesion saliency detection with the improved Faster R-CNN object detection algorithm, and segmentation of the lesion region subgraph. As is known to those skilled in the art, the Faster R-CNN object detection algorithm can be trained on annotated image data and finally generates a detection box adapted to the size of the lesion. To better suit the fine-grained image generation algorithm, the invention improves the traditional Faster R-CNN object detection algorithm as follows:
(1) The Faster R-CNN detection boxes are sorted in descending order of confidence and the top 50% are kept. A candidate box of 64 x 64 pixels is set. If a detection box is larger than the candidate box, a 64 x 64 pixel window centered on the detection box's center point is taken as the saliency detection box; if all the retained boxes are smaller than the candidate box, the 64 x 64 region with the highest lesion density is taken as the saliency detection box.
(2) If the saliency detection box extends beyond the edge of the leaf, its position is adjusted so that it contains only the lesion and leaf background information.
(3) The lesion region is segmented: a representative leaf lesion saliency subgraph is cropped according to the saliency detection box generated in step (2); refer to fig. 2.
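The confidence-ranked selection in step 2 can be sketched as follows. This is an illustrative reading of the strategy: the fallback search for the densest 64 x 64 lesion window is only signaled, not implemented, and all names are assumptions:

```python
def pick_saliency_box(detections, size=64):
    """detections: list of (confidence, x0, y0, x1, y1) Faster R-CNN boxes.
    Sort by confidence descending, keep the top 50%, and return a fixed
    size x size window centered on the first box at least that large.
    Returns None when every retained box is smaller, meaning a
    densest-lesion window search should be run instead."""
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)
    top = ranked[: max(1, len(ranked) // 2)]
    for conf, x0, y0, x1, y1 in top:
        if (x1 - x0) >= size and (y1 - y0) >= size:
            cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
            return (cx - size // 2, cy - size // 2,
                    cx + size // 2, cy + size // 2)
    return None  # fall back to the highest-lesion-density window
```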
Step 3: fine-grained contrast generation network image enhancement. The invention provides a grape leaf spot recognition method based on a fine-granularity countermeasure generation network, which is used for carrying out image enhancement operation by inputting a significance detection frame subgraph separated from the previous steps, and aims to expand a training set of a deep learning model, improve generalization capability of the model and increase recognition accuracy, and the main operation steps are as follows with reference to fig. 3 and 1.
(1) The segmented saliency subgraph is input, and a mask algorithm separates the foreground lesion region from the background region. Generating the foreground and background images with separate adversarial generation can be expressed as:

L = α·L_spot + β·L_background

where L denotes the overall generation objective, L_spot the term for the generated lesion image, L_background the term for the generated background image, and α and β are weight constants.
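The weighted combination of the lesion and background objectives can be illustrated with a standard non-saturating GAN generator loss; the patent does not fix the exact adversarial loss or the weight values, so both are assumptions here:

```python
import math

def g_adv_loss(d_scores):
    # Non-saturating generator loss, -mean(log D(G(z))), over a batch of
    # discriminator probabilities. The patent does not specify the exact
    # adversarial loss, so this standard form is an assumption.
    return -sum(math.log(max(s, 1e-12)) for s in d_scores) / len(d_scores)

def combined_loss(spot_scores, bg_scores, alpha=1.0, beta=0.5):
    # L = alpha * L_spot + beta * L_background; taking alpha > beta
    # emphasises lesion fidelity over background fidelity
    # (illustrative weight values, not from the patent).
    return alpha * g_adv_loss(spot_scores) + beta * g_adv_loss(bg_scores)
```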
(2) Foreground lesion generation stage. Foreground lesion generation is independent of the background and uses two generators, G_mask and G_spot. G_mask generates the mask image, and G_spot generates features of the lesion such as color and texture; the lesion foreground image P is finally composed as:

P = P_f,m + B_m

where P_f,m = P_m · P_f is the masked foreground image (P_m the generated mask, P_f the generated lesion), and B_m = (1 − P_m) · B is the masked background image. In general, a discriminator is required to judge whether a generated image is good or bad. Unlike a conventional generative adversarial network, the invention focuses on leaf lesion features such as shape, color, and texture, and the generator also takes a random noise vector z together with the generated lesion. The discriminator D_spot ensures that the generated lesion image is not affected by background noise.
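The mask-based composition P = P_m · P_f + (1 − P_m) · B can be sketched with NumPy; the array shapes and function name are illustrative:

```python
import numpy as np

def compose_foreground(mask, fg, bg):
    """Blend a generated lesion image fg into a background bg using the
    generated soft mask (values in [0, 1]): P = P_m*P_f + (1 - P_m)*B.
    A 2-D mask is broadcast across the channel axis of H x W x C images."""
    mask = mask[..., None] if mask.ndim == 2 else mask
    return mask * fg + (1.0 - mask) * bg
```

A mask value of 1 keeps the lesion pixel, a value of 0 keeps the background pixel, and intermediate values blend the two.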
(3) Background generation stage. This stage mainly uses one generator G_b and two discriminators, D_b and D_aux, where G_b generates the background image. The loss at this stage consists of two parts:

L_background = L_bg_adv + L_bg_aux

where L_bg_adv is the adversarial generation loss and L_bg_aux is an auxiliary background classification loss. Through L_bg_adv, the generated background image weakens surface texture differences: the generated image is divided into small patches, and the quality of the generated image is judged by comparing the differences patch by patch. The purpose of L_bg_aux is to complete background generation: the discriminator D_aux distinguishes foreground from background images and trains the generator.
(4) The generated background image and lesion image are synthesized into a new subgraph. In this embodiment 1500 images are generated in this step, 500 in each of the three categories.
Step 4: to verify the effect of the invention, the subgraphs segmented in step 2 are divided into a training set, a validation set, and a test set. Referring to fig. 4, the training and validation sets are mixed with the generated images to form new training and validation sets and fed into a deep learning classifier for training. To ensure experimental rigor, a currently popular generative adversarial network is also used for comparison. The experimental setup is as follows:
(1) First, the fine-grained generative adversarial network is compared with common GANs on the FID value (Frechet Inception Distance score), which directly reflects the similarity between a generated image and the original image. Its calculation formula is:

FID = ||μ_x − μ_g||² + Tr(Σ_x + Σ_g − 2·(Σ_x·Σ_g)^(1/2))

where μ_x and μ_g denote the feature means of the real images x and the generated images g, Σ_x and Σ_g their covariance matrices, and Tr the trace of a matrix. The smaller the FID value, the higher the similarity between the generated image and the original image. The FID values of the method of the invention and of common generative adversarial networks are shown in the following table:
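The FID formula above can be computed directly from the two feature distributions; a minimal sketch using SciPy's matrix square root (the feature extraction by an Inception network is assumed to have already produced the means and covariances):

```python
import numpy as np
from scipy import linalg

def fid(mu_x, sigma_x, mu_g, sigma_g):
    """Frechet Inception Distance between real (x) and generated (g)
    feature distributions:
        FID = ||mu_x - mu_g||^2 + Tr(S_x + S_g - 2*(S_x S_g)^(1/2))."""
    diff = mu_x - mu_g
    covmean, _ = linalg.sqrtm(sigma_x @ sigma_g, disp=False)
    covmean = covmean.real  # drop tiny imaginary parts from numerics
    return float(diff @ diff + np.trace(sigma_x + sigma_g - 2.0 * covmean))
```

Identical distributions give an FID of 0, consistent with smaller values meaning higher similarity.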
from the above table, it can be seen that the method of the present invention has great advantages over other antagonism generating networks.
(2) To verify the advantages of the method in leaf lesion recognition, several classical deep learning models are used to compute the lesion recognition accuracy on the original images and on the data enhanced by the generative adversarial networks. To ensure rigorous results, all deep learning classification models are trained and tested with the same parameters, listed in the following table:
specifically, the input image size is an image of a unified size after normalization, and the image input in this example is mainly divided into two parts: one part is the original subgraph divided in the step 2, and the other part is the subgraph generated by the pavilion countermeasure generation network, please refer to fig. 4; training a batch to be the number of images that the model can process each time; at the optimization layer, an adadelta optimization function is utilized to minimize a loss function, and meanwhile, the learning rate can be adaptively adjusted, and the initial learning rate is set to be 0.0001; in a deep learning classification model, each layer of convolution is followed by a ReLU activation function, the normalized tensor is solved by a LeakyReLU function, and the advantage of using the LeakyReLU is that: during the back propagation, gradients can also be calculated for the portions of the LeakyReLU activation function input that are less than zero, thus solving the neuron "death" (dying ReLU problem) problem. The loss function is an important function for measuring the difference between the calculated output of the network and the target, and the Cross entropy (Cross-entropy) loss function is adopted to solve the multi-classification problem.
The experimental results are as follows:
from the results, the invention has superior performance in the identification accuracy of the grape leaf lesions.
While specific embodiments of the present invention have been described above, it should be understood that the invention is not limited to them; those skilled in the art can make various changes and modifications within the scope of the claims without affecting the essence of the invention.
Claims (2)
1. A method for identifying grape leaf lesions based on a fine-grained generative adversarial network, comprising the steps of:
s1: the method comprises the steps of acquiring and marking data, wherein the data of the grape leaf lesions originate from an early-stage grape leaf lesion image on a network, or originate from a leaf lesion image of the early stage grape attack acquired by manual photographing, and the acquired image is used for marking the lesions manually by using an image marking tool;
s2: the improved target detection algorithm FasterR-CNN is utilized to detect and segment the significant areas of the grape leaf lesions, and the improvement method comprises the following steps: arranging FasterR-CNN detection frames according to a confidence coefficient descending order, taking the first 50% of detection frames, setting a 64 x 64 pixel frame to be selected, and taking 64 x 64 pixels as a significance detection frame by taking the center point of the detection frame as the center if the detection frame is larger than the frame to be selected; if all confidence detection frames are smaller than the to-be-selected frames, selecting the to-be-selected frame with the largest spot number as a saliency detection frame; if the saliency detection frame exceeds the edge of the blade, the position of the saliency detection frame is adjusted to ensure that the saliency detection frame only contains the disease spots and the background information of the blade; dividing the region of the lesion, and dividing a representative leaf lesion significance subgraph according to the generated significance detection frame;
S3: performing image enhancement on the saliency subgraph segmented in S2 using a fine-grained generative adversarial network, wherein the image-generation part of the network comprises: inputting the segmented saliency subgraph, separating the foreground lesion region from the background region with a mask algorithm, and generating the foreground and background images with separate adversarial generators, expressed as: L = αL_spot + βL_background, where L denotes the generated image, L_spot the generated lesion image, L_background the generated background image, and α and β are weight constants; foreground lesion generation stage: the foreground lesion is produced by two generators, one for mask generation and the other for lesion generation; foreground lesion generation is independent of the background and uses two generators, G_mask and G_spot, where G_mask generates the mask image and G_spot generates the color and texture features of the lesion; the final lesion foreground image P is expressed as: P = P_{f,m} + B_m, where P_{f,m} = P_m · P_f denotes the masked lesion image and B_m = (1 − P_m) denotes the mask background image; background generation stage: this stage uses one generator G_b and two discriminators, D_b and D_aux, where G_b generates the background image; the image generated at this stage consists of two parts: L_background = L_bg_adv + L_bg_aux, where L_bg_adv is the adversarial generation loss function and L_bg_aux is an auxiliary background-classification loss function; the background generation stage verifies the background-generation effect and, by adjusting the auxiliary background-classification loss function, weakens the background differences among lesions of the same class and highlights the lesion features; finally, the generated background image and the generated lesion image are synthesized into a new subgraph;
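The compositing equations of S3 can be illustrated numerically. A minimal sketch, assuming the mask background term (1 − P_m) is applied to a background image B as in mask-based GAN compositing (the claim writes only B_m = (1 − P_m)), and assuming α = β = 0.5 for the weight constants, which the claim leaves unspecified:

```python
import numpy as np

def compose_foreground(P_f, P_m, B):
    """Mask-based foreground composition from step S3:
    P = P_m * P_f + (1 - P_m) * B, where P_m is the generated mask,
    P_f the generated lesion appearance, and B a background image
    (the application of (1 - P_m) to B is an assumption here)."""
    return P_m * P_f + (1.0 - P_m) * B

def compose_final(L_spot, L_background, alpha=0.5, beta=0.5):
    """Weighted synthesis L = alpha * L_spot + beta * L_background;
    alpha = beta = 0.5 is an assumed choice of the weight constants."""
    return alpha * L_spot + beta * L_background
```

With a binary mask, `compose_foreground` keeps lesion pixels where P_m = 1 and background pixels where P_m = 0, which is the standard mask-compositing behavior the claim's P_{f,m} + B_m decomposition describes.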
S4: inputting the original images and the enhanced images into a deep-learning classification model for training, and storing the trained model;
s5: and (3) identifying the grape leaf spot by using the trained model stored in the step (S4).
2. The method for identifying grape leaf spot based on a fine-grained generative adversarial network according to claim 1, wherein the input image of the fine-grained generative adversarial network is a 196 x 196-pixel leaf image and the output is a 64 x 64-pixel saliency subgraph containing leaf-spot information.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110488184.0A CN113112498B (en) | 2021-05-06 | 2021-05-06 | Grape leaf spot identification method based on fine-grained countermeasure generation network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN113112498A CN113112498A (en) | 2021-07-13 |
| CN113112498B true CN113112498B (en) | 2024-01-19 |
Family
ID=76721270
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110488184.0A Active CN113112498B (en) | 2021-05-06 | 2021-05-06 | Grape leaf spot identification method based on fine-grained countermeasure generation network |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113112498B (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113917087B (en) * | 2021-10-19 | 2023-09-01 | 云南大学 | Method for monitoring occurrence and morbidity degree of plant diseases and application thereof |
| CN114120117B (en) | 2021-11-19 | 2025-08-29 | 杭州睿胜软件有限公司 | Display method, display system and readable storage medium for plant disease diagnosis information |
| CN114445785B (en) * | 2022-04-11 | 2022-06-21 | 广东省农业科学院植物保护研究所 | Internet of things-based litchi insect pest monitoring and early warning method and system and storage medium |
| CN116596879B (en) * | 2023-05-17 | 2025-12-09 | 旬邑县气象局 | Grape downy mildew self-adaptive identification method based on quantiles of boundary samples |
| CN118366044B (en) * | 2024-06-19 | 2024-08-23 | 成都理工大学 | Rice extraction method based on semi-supervised learning generative adversarial network |
Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109344699A (en) * | 2018-08-22 | 2019-02-15 | 天津科技大学 | Disease identification method of winter jujube based on hierarchical deep convolutional neural network |
| CN109977841A (en) * | 2019-03-20 | 2019-07-05 | 中南大学 | A face recognition method based on adversarial deep learning network |
| CN110188824A (en) * | 2019-05-31 | 2019-08-30 | 重庆大学 | A small sample plant disease identification method and system |
| CN111160135A (en) * | 2019-12-12 | 2020-05-15 | 太原理工大学 | Urine red blood cell lesion identification and statistical method and system based on improved Faster R-cnn |
| CN111369498A (en) * | 2020-02-19 | 2020-07-03 | 浙江大学城市学院 | A Data Augmentation Method for Seedling Growth Viability Evaluation Based on Improved Generative Adversarial Networks |
| CN111369540A (en) * | 2020-03-06 | 2020-07-03 | 西安电子科技大学 | Plant leaf disease identification method based on masked convolutional neural network |
| CN111985499A (en) * | 2020-07-23 | 2020-11-24 | 东南大学 | A high-precision bridge apparent disease identification method based on computer vision |
| CN112183635A (en) * | 2020-09-29 | 2021-01-05 | 南京农业大学 | Method for realizing segmentation and identification of plant leaf lesions by multi-scale deconvolution network |
| CN112241762A (en) * | 2020-10-19 | 2021-01-19 | 吉林大学 | Fine-grained identification method for pest and disease damage image classification |
| CN112257702A (en) * | 2020-11-12 | 2021-01-22 | 武荣盛 | Crop disease identification method based on incremental learning |
| WO2021017261A1 (en) * | 2019-08-01 | 2021-02-04 | 平安科技(深圳)有限公司 | Recognition model training method and apparatus, image recognition method and apparatus, and device and medium |
| CN112488963A (en) * | 2020-12-18 | 2021-03-12 | 中国科学院合肥物质科学研究院 | Method for enhancing crop disease data |
| KR20210047230A (en) * | 2019-10-21 | 2021-04-29 | 배재대학교 산학협력단 | Fruit tree disease Classification System AND METHOD Using Generative Adversarial Networks |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10423850B2 (en) * | 2017-10-05 | 2019-09-24 | The Climate Corporation | Disease recognition from images having a large field of view |
| US11730387B2 (en) * | 2018-11-02 | 2023-08-22 | University Of Central Florida Research Foundation, Inc. | Method for detection and diagnosis of lung and pancreatic cancers from imaging scans |
| EP3742346A3 (en) * | 2019-05-23 | 2021-06-16 | HTC Corporation | Method for training generative adversarial network (gan), method for generating images by using gan, and computer readable storage medium |
Non-Patent Citations (6)

| Title |
|---|
| A Data Augmentation Method Based on Generative Adversarial Networks for Grape Leaf Disease Identification; Bin Liu; IEEE Access; Vol. 8; 102188-102198 * |
| Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks; Shaoqing Ren; IEEE Transactions on Pattern Analysis and Machine Intelligence; Vol. 39, No. 6; 1137-1149 * |
| FineGAN: Unsupervised Hierarchical Disentanglement for Fine-Grained Object Generation and Discovery; Krishna Kumar Singh; 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 1-18 * |
| Suspicious region labeling of breast cancer pathology images based on generative adversarial networks; Liu Haidong; Yang Xiaoyu; Zhu Linzhong; e-Science Technology & Application; 2017, No. 06 * |
| Research progress on residual neural network optimization algorithms for medical image disease diagnosis; Zhou Tao; Huo Bingqiang; Lu Huiling; Shi Hongbin; Journal of Image and Graphics; 2020, No. 10 * |
| Identification of multiple plant leaf diseases based on neural architecture search; Huang Jianping; Chen Jingxu; Li Kexin; Li Junyu; Liu Hang; Transactions of the Chinese Society of Agricultural Engineering; 2020, No. 16 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN113112498B (en) | Grape leaf spot identification method based on fine-grained countermeasure generation network | |
| Khan et al. | An automated system for cucumber leaf diseased spot detection and classification using improved saliency method and deep features selection | |
| Kesav et al. | Efficient and low complex architecture for detection and classification of Brain Tumor using RCNN with Two Channel CNN | |
| Xu et al. | Regional clustering-based spatial preprocessing for hyperspectral unmixing | |
| JP5823270B2 (en) | Image recognition apparatus and method | |
| CN105574063B (en) | Image retrieval method based on visual saliency | |
| CN102103690A (en) | Method for automatically portioning hair area | |
| CN112949634B (en) | A method for detecting bird nests in railway contact network | |
| CN107239759A (en) | A kind of Hi-spatial resolution remote sensing image transfer learning method based on depth characteristic | |
| Ouyang et al. | The research of the strawberry disease identification based on image processing and pattern recognition | |
| CN113222062A (en) | Method, device and computer readable medium for tobacco leaf classification | |
| CN118135325B (en) | Medical image fine granularity classification method based on segmentation large model guidance | |
| CN113657216A (en) | Method for separating tree crown and wood point of tree in point cloud scene based on shape characteristics | |
| Li et al. | Optimized automatic seeded region growing algorithm with application to ROI extraction | |
| CN107622280B (en) | Modular image saliency detection method based on scene classification | |
| CN113298782A (en) | Interpretable kidney tumor identification method and imaging method | |
| CN107067037B (en) | Method for positioning image foreground by using LL C criterion | |
| Li et al. | Incremental learning of infrared vehicle detection method based on SSD | |
| Jiang et al. | An effective multi-classification method for NHL pathological images | |
| CN109271902B (en) | Infrared weak and small target detection method based on time domain empirical mode decomposition under complex background | |
| Zhang et al. | Saliency detection via image sparse representation and color features combination | |
| Ma et al. | Proposing regions from histopathological whole slide image for retrieval using selective search | |
| CN110147840A (en) | The weak structure object fine grit classification method divided based on the unsupervised component of conspicuousness | |
| Paul et al. | Enhanced random forest for mitosis detection | |
| CN103488997B (en) | Hyperspectral image band selection method based on all kinds of important wave band extractions |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||