CN120198940B - Insect classification method and system based on multi-task convolutional neural network - Google Patents
- Publication number: CN120198940B (application CN202510681199.7A)
- Authority
- CN
- China
- Prior art keywords
- edge
- insect
- index
- image
- pixel points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/32—Normalisation of the pattern dimensions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
- G06V10/422—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation for representing the structure of the pattern or shape of an object therefor
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The invention provides an insect classification method and system based on a multi-task convolutional neural network, and relates to the technical field of insect recognition. Edge features are extracted using Canny edge detection, and edge complexity and edge circularity are calculated to generate an edge feature index; hue, saturation and brightness information is extracted from the HSV image to generate the image vividness. The edge feature index and the image vividness are spliced to form a comprehensive feature vector, and a feature library covering different insect species and their life stages is constructed. Finally, an insect classification model is built on a multi-task convolutional neural network; after training, the comprehensive feature vector of an insect image to be identified is input into the model to judge the insect species and life stage.
Description
Technical Field
The invention relates to the technical field of insect identification, and in particular to an insect classification method and system based on a multi-task convolutional neural network.
Background
Insect classification has important significance in fields such as biodiversity research, agricultural pest control and ecological environment monitoring. Traditional insect classification methods generally rely on manual observation and identification; they are time-consuming and labor-intensive and easily affected by human factors, so the accuracy and consistency of the results are low. In recent years, with the development of image processing and machine learning technology, automatic image-based classification methods have attracted attention. However, prior-art insect classification methods are often limited to a single feature, such as color, texture or morphological features, and cannot fully express the complex information of an insect image. In addition, many classification methods ignore the feature changes of insects across different life stages, so the accuracy and refinement of classification are insufficient. It is therefore highly desirable to construct an insect classification method that is efficient, accurate and applicable across different life stages.
In the prior art, publication number CN116343182A discloses an insect species recognition method, system and electronic device. An infrared thermal imager arranged in the insect species recognition system collects a current-frame thermal imaging image of a target area according to a first collection period, processes and recognizes the current-frame thermal imaging image, establishes a target search frame, processes the target search frame and marks a dot matrix, calculates the total gray value of each marking point in the marking dot matrix, calculates the actual difference between that total gray value and the total gray value of each sample point in each sample dot matrix of each insect species in a sample database, and outputs a signal related to the insect species judgment result according to the actual difference.
The main problems of this scheme are that feature extraction relies solely on infrared thermal imaging to extract the temperature distribution characteristics of insects, so the feature dimension is single and the recognition accuracy is insufficient; moreover, the infrared thermal imager is easily affected by factors such as ambient temperature and humidity, so its applicability is poor.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure, and therefore it may contain information that does not form the prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
The invention aims to provide an insect classification method and system based on a multi-task convolutional neural network, so as to solve the problems raised in the background art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
An insect classification method based on a multi-task convolutional neural network comprises the following specific steps:
Step 1, collecting insect images to be classified and graded, scaling the insect images to 224 × 224, and copying them into two identical groups; one group undergoes graying processing to generate a first identification image, and the other group is converted from the RGB color space to the HSV color space to generate a second identification image;
Step 2, extracting edge pixel points of the first identification image based on Canny edge detection to generate a third identification image; recording the number of edge pixel points in the third identification image and the area and perimeter of the region enclosed by the edge pixel points; generating the edge complexity based on the number of edge pixel points and the enclosed area, generating the edge circularity based on the perimeter and the enclosed area, and generating an edge feature index based on the edge complexity and the edge circularity;
Step 3, extracting the hue, saturation and brightness of the pixel points in the second identification image, generating a hue index, a saturation index and a brightness index of the second identification image based on the averages of the hue, saturation and brightness of all pixel points, and generating the image vividness based on the hue index, saturation index and brightness index;
Step 4, splicing the edge characteristic indexes and the image vividness to generate comprehensive characteristic vectors, calculating the comprehensive characteristic vectors corresponding to different kinds of insects in different life stages according to the method of the steps 1-3, and constructing an insect characteristic library containing the comprehensive characteristic vectors of different kinds of insects in different life stages;
Step 5, constructing a model based on the multi-task convolutional neural network, taking the comprehensive feature vectors in the insect feature library as input and the insect species and life stages as labels, and training the insect classification model;
Step 6, inputting the comprehensive feature vector of the insect image to be classified into the trained insect classification model, and outputting the insect species and life stage.
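Step 1's scaling can be sketched in Python; `resize_nearest` is a hypothetical stand-in for a library resize call (a real pipeline would typically use a bilinear resize from OpenCV or PIL):

```python
import numpy as np

def resize_nearest(img, out_h=224, out_w=224):
    """Scale an image to out_h x out_w by nearest-neighbour sampling.
    A minimal stand-in for a library resize; real pipelines would use
    bilinear interpolation instead."""
    h, w = img.shape[:2]
    rows = (np.arange(out_h) * h) // out_h   # source row for each output row
    cols = (np.arange(out_w) * w) // out_w   # source column for each output column
    return img[rows][:, cols]
```

After resizing, the image is duplicated: one copy is grayed for edge analysis, the other converted to HSV for color analysis.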
Further, the principle on which the third identification image is generated is:
For each pixel point in the first identification image, respectively convolving the pixel point and eight adjacent pixel points around the pixel point with a horizontal direction template and a vertical direction template of a Prewitt operator to generate gray level difference of the pixel point in the horizontal direction and the vertical direction, wherein the gray level difference is formed according to the following formula:
P_x = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]

P_y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

G_x(x, y) = I(x, y) * P_x

G_y(x, y) = I(x, y) * P_y

where P_x denotes the horizontal-direction template of the Prewitt operator, P_y denotes the vertical-direction template of the Prewitt operator, G_x(x, y) denotes the horizontal-direction difference of the pixel point, G_y(x, y) denotes the vertical-direction difference of the pixel point, I denotes the first identification image, and (x, y) denotes the coordinates of the pixel point;
According to the gray-level differences in the horizontal and vertical directions, the gradient amplitude of each pixel point is generated according to the following formula:

G(x, y) = sqrt(G_x(x, y)^2 + G_y(x, y)^2)

where G(x, y) denotes the gradient amplitude of the pixel point with coordinates (x, y), G_x(x, y) denotes its horizontal-direction difference, and G_y(x, y) denotes its vertical-direction difference;
An edge threshold is preset, and a pixel point is marked as an edge pixel point when its gradient amplitude is higher than the edge threshold.
Further, the principle on which the edge feature index is generated is as follows:
The formula from which the edge complexity is generated is:

C = N / A

where C denotes the edge complexity, N denotes the number of edge pixel points, and A denotes the area of the region enclosed by the edge pixel points;
The formula according to which the edge circularity is generated is:

R = 4πA / P^2

where R denotes the edge circularity and P denotes the perimeter of the edge;
The formula according to which the edge feature index is generated is:
E = C / R

where E denotes the edge feature index.
Further, the principle on which the vividness of the image is generated is as follows:
The formulas according to which the hue index, saturation index and brightness index are generated are:
H_mean = (1/n) · Σ_{i=1}^{n} H_i

S_mean = (1/n) · Σ_{i=1}^{n} S_i

V_mean = (1/n) · Σ_{i=1}^{n} V_i

where H_mean denotes the hue index, i denotes the index of a pixel point (1 ≤ i ≤ n), n denotes the number of pixel points, H_i denotes the hue of the i-th pixel point, S_mean denotes the saturation index, S_i denotes the saturation of the i-th pixel point, V_mean denotes the brightness index, and V_i denotes the brightness of the i-th pixel point;
The formulas according to which the image vividness is generated are as follows:

Vivid = S_mean · f(H_mean, V_mean)

f(H_mean, V_mean) = w(H_mean) · (1 − 2·|V_mean − 0.5|)

where Vivid denotes the image vividness, f denotes the correction function of the color vividness, and w(H_mean) is a hue weight that is larger for warm hues than for cool hues;
Further, the formula according to which the comprehensive feature vector is generated is:

F = [E, Vivid]

where F denotes the comprehensive feature vector, E denotes the edge feature index, and Vivid denotes the image vividness.
The invention also provides an insect classification system based on the multi-task convolutional neural network, the system being used for implementing the above insect classification method and specifically comprising:
The image acquisition module is used for collecting insect images to be classified and graded, scaling the insect images to 224 × 224, and copying them into two identical groups; one group undergoes graying processing to generate a first identification image, and the other group is converted from the RGB color space to the HSV color space to generate a second identification image;
The edge extraction module is used for extracting edge pixel points of the first identification image based on Canny edge detection to generate a third identification image; recording the number of edge pixel points in the third identification image and the area and perimeter of the region enclosed by the edge pixel points; generating the edge complexity based on the number of edge pixel points and the enclosed area, generating the edge circularity based on the perimeter and the enclosed area, and generating an edge feature index based on the edge complexity and the edge circularity;
The color extraction module is used for extracting the hue, saturation and brightness of the pixel points in the second identification image, generating a hue index, a saturation index and a brightness index of the second identification image based on the average value of the hue, the saturation and the brightness of all the pixel points, and generating image vividness based on the hue index, the saturation index and the brightness index;
The feature library construction module is used for splicing the edge feature index and the image vividness to generate comprehensive feature vectors, calculating the comprehensive feature vectors corresponding to different insect species at different life stages according to the methods of the preceding modules, and constructing an insect feature library containing the comprehensive feature vectors of different insect species at different life stages;
The model training module is used for constructing a model based on the multi-task convolutional neural network, taking the comprehensive feature vectors in the insect feature library as input and the insect species and life stages as labels, and training the insect classification model;
And the judging and output module is used for inputting the comprehensive feature vector of the insect image to be classified into the trained insect classification model and outputting the insect species and life stage.
Compared with the prior art, the invention has the beneficial effects that:
According to the method, the edge features and the color features of the image are extracted separately and a more comprehensive feature vector is constructed, which improves the recognition precision of the classification model across different insects and different life stages and provides richer insect appearance features. The color features of the image are comprehensively analyzed by extracting hue, saturation and brightness; the hue index, saturation index and brightness index are calculated from the averages over all pixel points, which effectively alleviates the problem of uneven image color distribution caused by illumination, shooting angle or other interference factors, and enhances the stability of feature extraction and the applicability of the scheme. The image vividness quantifies how vivid the surface color of the insect is, which helps identify insects with obvious color differences.
The method also generates the comprehensive feature vector by splicing the edge feature index and the image vividness, making full use of the complementarity between different features. The feature library covers insects of different species and life stages and comprehensively reflects their diversity, which helps improve the generalization capability of the trained model. The insect classification model is trained on a multi-task convolutional neural network, which can quickly adapt to new data and finely classify specific insect species and life stages, ensuring the accuracy and efficiency of the whole scheme.
Drawings
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a gray scale image of an insect at the same stage of growth according to an embodiment of the present invention;
FIG. 3 is an image of insect HSV at the same stage of growth in an embodiment of the present invention;
FIG. 4 shows gray scale images of insects at different stages of growth according to an embodiment of the present invention;
FIG. 5 is an image of insect HSV at different stages of growth according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a system module according to an embodiment of the invention.
Detailed Description
The present invention will be further described in detail with reference to specific embodiments in order to make the objects, technical solutions and advantages of the present invention more apparent.
It is to be noted that, unless otherwise defined, technical or scientific terms used herein should be taken in the general sense understood by one of ordinary skill in the art to which the present invention belongs. The terms "first," "second," and the like, as used herein, do not denote any order, quantity, or importance, but are used to distinguish one element from another. The word "comprising" or "comprises," and the like, means that the element or item preceding the word encompasses the elements or items listed after the word and their equivalents, without excluding other elements or items. The terms "connected" or "coupled," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Up," "down," "left," "right," and the like are used only to indicate a relative positional relationship, which may change accordingly when the absolute position of the described object changes.
Examples:
Referring to fig. 1 to 5, the present invention provides a technical solution:
An insect classification method based on a multi-task convolutional neural network comprises the following specific steps:
Step 1, collecting insect images to be classified and graded, scaling the insect images to 224 × 224, and copying them into two identical groups; one group undergoes graying processing to generate a first identification image, and the other group is converted from the RGB color space to the HSV color space to generate a second identification image;
In this embodiment, the formula according to which the first identification image is generated by the graying processing is:

Gray(x, y) = 0.299·R(x, y) + 0.587·G(x, y) + 0.114·B(x, y)

where Gray denotes the gray value of the pixel point, R denotes the red channel value of the pixel point, G denotes the green channel value of the pixel point, and B denotes the blue channel value of the pixel point;
The second identification image is generated on the following principle:

For any pixel point in the image, let its value in the RGB color space be (R, G, B) and its value in the HSV color space be (H, S, V). First, the R, G, B values are normalised to r', g', b', corresponding to R, G, B respectively:

r' = R / 255

g' = G / 255

b' = B / 255

Based on r', g', b', with C_max = max(r', g', b'), C_min = min(r', g', b') and Δ = C_max − C_min, H, S, V are generated according to the following formulas:

H = 0, if Δ = 0; H = (60 · (g' − b') / Δ) mod 360, if C_max = r'; H = 60 · (b' − r') / Δ + 120, if C_max = g'; H = 60 · (r' − g') / Δ + 240, if C_max = b'

S = 0 if C_max = 0, otherwise S = Δ / C_max

V = C_max

wherein H, S, V represent hue, saturation and brightness, respectively.
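A per-pixel sketch of the two conversions above (the 0.299/0.587/0.114 graying weights and the standard RGB-to-HSV formulas, hue in degrees):

```python
def to_gray(R, G, B):
    # graying formula used for the first identification image
    return 0.299 * R + 0.587 * G + 0.114 * B

def rgb_to_hsv(R, G, B):
    # normalise channels to [0, 1]
    r, g, b = R / 255.0, G / 255.0, B / 255.0
    cmax, cmin = max(r, g, b), min(r, g, b)
    delta = cmax - cmin
    if delta == 0:
        h = 0.0                                   # achromatic pixel
    elif cmax == r:
        h = (60 * (g - b) / delta) % 360
    elif cmax == g:
        h = 60 * (b - r) / delta + 120
    else:
        h = 60 * (r - g) / delta + 240
    s = 0.0 if cmax == 0 else delta / cmax        # saturation
    v = cmax                                      # brightness (value)
    return h, s, v
```

For example, a pure red pixel (255, 0, 0) maps to hue 0, saturation 1 and brightness 1.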
Step 2, extracting edge pixel points of the first identification image based on Canny edge detection to generate a third identification image; recording the number of edge pixel points in the third identification image and the area and perimeter of the region enclosed by the edge pixel points; generating the edge complexity based on the number of edge pixel points and the enclosed area, generating the edge circularity based on the perimeter and the enclosed area, and generating an edge feature index based on the edge complexity and the edge circularity;
In this embodiment, the principle on which the third identification image is generated is as follows:
For each pixel point in the first identification image, respectively convolving the pixel point and eight adjacent pixel points around the pixel point with a horizontal direction template and a vertical direction template of a Prewitt operator to generate gray level difference of the pixel point in the horizontal direction and the vertical direction, wherein the gray level difference is formed according to the following formula:
P_x = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]

P_y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

G_x(x, y) = I(x, y) * P_x

G_y(x, y) = I(x, y) * P_y

where P_x denotes the horizontal-direction template of the Prewitt operator, P_y denotes the vertical-direction template of the Prewitt operator, G_x(x, y) denotes the horizontal-direction difference of the pixel point, G_y(x, y) denotes the vertical-direction difference of the pixel point, I denotes the first identification image, and (x, y) denotes the coordinates of the pixel point;
According to the gray-level differences in the horizontal and vertical directions, the gradient amplitude of each pixel point is generated according to the following formula:

G(x, y) = sqrt(G_x(x, y)^2 + G_y(x, y)^2)

where G(x, y) denotes the gradient amplitude of the pixel point with coordinates (x, y), G_x(x, y) denotes its horizontal-direction difference, and G_y(x, y) denotes its vertical-direction difference;
An edge threshold is preset, and a pixel point is marked as an edge pixel point when its gradient amplitude is higher than the edge threshold.
The gradient amplitude reflects the rate of change of the gray value at a pixel point: the higher the gradient amplitude, the more severe the change of the gray value, and the more likely the pixel point is an edge pixel point. Therefore, the edge threshold is compared with the gradient amplitude. When the gradient amplitude is smaller than the edge threshold, the change of the gray value is not obvious and the pixel point is not considered an edge pixel point; it may belong to a smooth area of the image or be noise. When the gradient amplitude is larger than the edge threshold, the change of the gray value is obvious and the pixel point may lie in an edge area of the image. The median gradient amplitude of the pixel points is selected as the initial edge threshold, the edge pixel points extracted with this initial threshold are examined, and it is judged whether they reflect the contour characteristics of the insect: if the edge pixel points are too sparse, the edge threshold is lowered; if they are too dense, the edge threshold is raised.
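A minimal sketch of the gradient computation and thresholding described above, using the Prewitt templates in pure NumPy (a real implementation would more likely call OpenCV's Canny directly):

```python
import numpy as np

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])  # horizontal template
PREWITT_Y = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]])  # vertical template

def edge_pixels(gray, threshold):
    """Mark pixels whose Prewitt gradient amplitude exceeds the edge threshold.
    Border pixels are left unmarked for simplicity."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx = float(np.sum(patch * PREWITT_X))   # horizontal gray-level difference
            gy = float(np.sum(patch * PREWITT_Y))   # vertical gray-level difference
            mask[y, x] = np.hypot(gx, gy) > threshold
    return mask
```

The threshold could start at the median gradient amplitude and then be lowered or raised as the text describes.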
The principle on which the edge feature index is generated is as follows:
The formula from which the edge complexity is generated is:

C = N / A

where C denotes the edge complexity, N denotes the number of edge pixel points, and A denotes the area of the region enclosed by the edge pixel points;
The edge complexity reflects how complex the edges in the third identification image are: the higher the edge complexity, the more detail there is in the insect's appearance structure. Together with the shape and structure, it is used to distinguish insect species with complex outlines from those with simple outlines, and to distinguish shapes of different complexity corresponding to insects at different stages. The edge complexity is proportional to the number of edge pixel points and inversely proportional to the area enclosed by them; a higher edge complexity corresponds to an insect with a more complex appearance structure.
The formula according to which the edge circularity is generated is:
R = 4πA / P^2

where R denotes the edge circularity and P denotes the perimeter of the edge;
The edge circularity reflects whether the outline of the region enclosed by the edge pixel points is close to a circle: the higher the edge circularity, the closer that region is to a circle, and the closer the corresponding insect shape is to a circle, for example beetles, or insects in the larval and pupal stages.
The formula according to which the edge feature index is generated is:
E = C / R

where E denotes the edge feature index.
The edge complexity measures the level of edge detail: the higher it is, the more complex the edge shape. The edge circularity reflects the regularity of the shape enclosed by the edge: the closer it is to 1, the closer the shape is to a circle, and the smaller the value, the more irregular the shape. The edge feature index generated from the two therefore reflects both the level of edge detail and the outline shape. The index is proportional to the edge complexity and inversely proportional to the edge circularity, so a complex, irregular (low-circularity) edge yields a high index, while a simple, near-circular edge yields a low index.
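The three quantities can be combined as sketched below; the plain ratio used for the index is an assumption consistent with the stated proportionalities, not a formula confirmed by the patent:

```python
import math

def edge_feature_index(n_edge_pixels, area, perimeter):
    """Edge complexity C = N / A, circularity R = 4*pi*A / P^2 (equal to 1
    for a perfect circle), and an index proportional to C and inverse to R."""
    complexity = n_edge_pixels / area
    circularity = 4 * math.pi * area / perimeter ** 2
    return complexity, circularity, complexity / circularity
```

For a circle of radius r (area πr², perimeter 2πr) the circularity is exactly 1, so irregular outlines push the index up.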
Step 3, extracting the hue, saturation and brightness of the pixel points in the second identification image, generating a hue index, a saturation index and a brightness index of the second identification image based on the averages of the hue, saturation and brightness of all pixel points, and generating the image vividness based on the hue index, saturation index and brightness index;
In this embodiment, the image vividness is generated on the following principle:

The formulas according to which the hue index, saturation index and brightness index are generated are:

H_mean = (1/n) · Σ_{i=1}^{n} H_i

S_mean = (1/n) · Σ_{i=1}^{n} S_i

V_mean = (1/n) · Σ_{i=1}^{n} V_i

where H_mean denotes the hue index, i denotes the index of a pixel point (1 ≤ i ≤ n), n denotes the number of pixel points, H_i denotes the hue of the i-th pixel point, S_mean denotes the saturation index, S_i denotes the saturation of the i-th pixel point, V_mean denotes the brightness index, and V_i denotes the brightness of the i-th pixel point;
The formulas according to which the image vividness is generated are as follows:

Vivid = S_mean · f(H_mean, V_mean)

f(H_mean, V_mean) = w(H_mean) · (1 − 2·|V_mean − 0.5|)

where Vivid denotes the image vividness, f denotes the correction function of the color vividness, and w(H_mean) is a hue weight that is larger for warm hues than for cool hues;
The saturation reflects the purity of a color, that is, the amount of gray component in it: the higher the saturation, the purer the color and the more vivid it appears. At saturation 0 the color is gray, and at saturation 1 it is in its purest state; therefore saturation has the largest influence on the image vividness, and the image vividness is proportional to the saturation. The brightness reflects how light or dark the color is: vividness is greatest near the median brightness of 0.5 and gradually decreases as the brightness becomes higher or lower. The hue reflects the type of color and does not directly affect the saturation, but different hues differ in visual expressiveness, warm colors appearing more vivid than cool colors, so different weights are set for different hues to adjust for the difference between warm and cool colors.
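A hedged sketch of the vividness computation implied above; the warm-hue weight of 1.2 and the warm-hue ranges are illustrative assumptions, not values from the patent:

```python
def hue_weight(h_mean):
    """Assumed correction: warm hues (reds/oranges/yellows, H < 90 degrees,
    and magenta-reds, H > 300 degrees) are weighted above cool hues."""
    return 1.2 if h_mean < 90 or h_mean > 300 else 1.0

def image_vividness(h_mean, s_mean, v_mean):
    # proportional to the saturation index, maximal at mid brightness (0.5),
    # falling to zero at brightness 0 or 1
    return s_mean * (1 - 2 * abs(v_mean - 0.5)) * hue_weight(h_mean)
```

A fully saturated warm color at mid brightness thus scores highest, while very dark or very bright images score near zero regardless of saturation.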
Step 4, splicing the edge characteristic indexes and the image vividness to generate comprehensive characteristic vectors, calculating the comprehensive characteristic vectors corresponding to different kinds of insects in different life stages according to the method of the steps 1-3, and constructing an insect characteristic library containing the comprehensive characteristic vectors of different kinds of insects in different life stages;
In this embodiment, the formula according to which the comprehensive feature vector is generated is:

F = [E, Vivid]

where F denotes the comprehensive feature vector, E denotes the edge feature index, and Vivid denotes the image vividness.
The comprehensive feature vector reflects both the morphological and the color features of an insect. Different insects differ in shape, size, color and so on; combining the complexity of the contour edge with the distinctiveness of the color distribution distinguishes different insect species. The physical features of an insect also change obviously across life stages, including larva, pupa and adult: the edge feature index captures morphological changes such as the appearance of wings or changes in body shape, while the image vividness reflects how vivid the colors are, for example the single, dull color of the larval stage versus the vivid colors of the adult. The edge feature index and image vividness of insects of known species and life stages are calculated, the corresponding comprehensive feature vectors are generated, and an insect feature library is constructed containing the comprehensive feature vectors of different insects at different life stages.
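A minimal sketch of the feature library as a plain dictionary keyed by species and life stage; the species names and numeric entries are illustrative, not measured values:

```python
import numpy as np

def comprehensive_vector(edge_index, vividness):
    # splice the edge feature index and the image vividness into one vector
    return np.array([edge_index, vividness])

# illustrative feature library: (species, life stage) -> comprehensive vector
feature_library = {
    ("ladybird", "larva"): comprehensive_vector(3.1, 0.25),
    ("ladybird", "adult"): comprehensive_vector(1.4, 0.85),
}
```

Each entry pairs one morphological scalar with one color scalar, so the library rows are the 2-dimensional vectors the classifier is trained on.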
Step 5, constructing a model based on a multitasking convolutional neural network, taking a comprehensive feature vector in an insect feature library as input, taking the type and life stage of insects as labels, and training an insect classification model;
in this embodiment, the structure of the model constructed based on the multitasking convolutional neural network is:
an input layer comprising 1 neuron for inputting the comprehensive feature vector;
a first hidden layer comprising 64 neurons activated by a ReLU function;
a second hidden layer comprising 32 neurons activated by a ReLU function;
the output layer contains 2 neurons for outputting the insect species and life stages.
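A minimal numpy forward-pass sketch of the layer sizes listed above. The weights here are random and untrained, the 2-D comprehensive vector (X, K) is taken as the input (rather than a single scalar), and a practical multi-task model would use a separate classification head per task; this is only a shape-level illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Layer sizes following the structure above: 2-D input (X, K),
# 64-unit and 32-unit ReLU hidden layers, and 2 output neurons
# (one score for species, one for life stage).
W1, b1 = 0.1 * rng.normal(size=(2, 64)), np.zeros(64)
W2, b2 = 0.1 * rng.normal(size=(64, 32)), np.zeros(32)
W3, b3 = 0.1 * rng.normal(size=(32, 2)), np.zeros(2)

def classify(z):
    h = relu(z @ W1 + b1)
    h = relu(h @ W2 + b2)
    return h @ W3 + b3  # raw scores; decoded per task after training
```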
And 6, inputting the comprehensive feature vectors of the insect images to be classified into the trained insect classification model, and outputting insect types and life stages.
Referring to fig. 6, the invention further provides an insect classification and grading system based on the multi-task convolutional neural network, the system being used for implementing the above insect classification and grading method, and specifically comprising the following modules:
the image acquisition module is used for acquiring the insect images to be classified and graded, scaling them to 224×224, and copying them into two identical groups; one group is subjected to graying treatment to generate a first identification image, and the other group is converted from the RGB color space to the HSV color space to generate a second identification image;
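As a sketch of this module, assuming the resize to 224×224 has already been done upstream (e.g. with PIL's `Image.resize`), the two recognition images can be produced with the standard BT.601 luminance weights and Python's stdlib `colorsys` conversion:

```python
import colorsys
import numpy as np

def make_recognition_images(rgb):
    # rgb: H x W x 3 uint8 array, already resized to 224 x 224 upstream.
    rgbf = rgb.astype(float) / 255.0
    # First identification image: ITU-R BT.601 grayscale weights.
    gray = 0.299 * rgbf[..., 0] + 0.587 * rgbf[..., 1] + 0.114 * rgbf[..., 2]
    # Second identification image: per-pixel RGB -> HSV conversion.
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgbf.reshape(-1, 3)])
    return gray, hsv.reshape(rgbf.shape)
```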
The edge extraction module is used for extracting edge pixel points of the first identification image based on canny edge detection, generating a third identification image, recording the number of the edge pixel points in the third identification image, the area and the perimeter of an area surrounded by the edge pixel points, generating edge complexity based on the number of the edge pixel points and the area surrounded by the edge pixel points, generating edge circularity based on the perimeter and the area surrounded by the edge pixel points, and generating an edge feature index based on the edge complexity and the edge circularity;
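A minimal sketch of the edge metrics, assuming the binary edge mask, enclosed area A and perimeter L have already been obtained (e.g. from OpenCV's `Canny` and `findContours`). It uses C = N/A and the standard circularity R = 4πA/L²; the weighted-sum combination of C and R into the edge feature index X is a hypothetical choice, since the patent's exact combination formula is not reproduced in this text:

```python
import numpy as np

def edge_feature_index(edge_mask, area, perimeter, w_c=0.5, w_r=0.5):
    # N: number of edge pixels in the binary edge mask.
    N = int(np.count_nonzero(edge_mask))
    C = N / area                             # edge complexity C = N / A
    R = 4.0 * np.pi * area / perimeter ** 2  # circularity; 1.0 for a circle
    return w_c * C + w_r * R                 # hypothetical combination into X
```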
The color extraction module is used for extracting the hue, saturation and brightness of the pixel points in the second identification image, generating a hue index, a saturation index and a brightness index of the second identification image based on the average value of the hue, the saturation and the brightness of all the pixel points, and generating image vividness based on the hue index, the saturation index and the brightness index;
The characteristic library construction module is used for splicing the edge characteristic indexes and the image vividness to generate comprehensive characteristic vectors, calculating the comprehensive characteristic vectors corresponding to different kinds of insects in different life stages according to the methods of the preceding modules, and constructing an insect characteristic library containing the comprehensive characteristic vectors of different kinds of insects in different life stages;
The model training module is used for constructing a model based on the multi-task convolutional neural network, taking the comprehensive feature vectors in the insect feature library as input and the species and life stage of the insects as labels, and training the insect classification and grading model;
And the judging and outputting module is used for inputting the comprehensive feature vector of the insect image to be classified into the trained insect classification and grading model and outputting the insect species and life stage.
The above formulas are all dimensionless, numerically evaluated forms; they were obtained by software simulation over a large amount of collected data so as to reflect the latest real situation, and the preset parameters in the formulas are set by those skilled in the art according to the actual situation.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any other combination. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. Those of skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application.
Claims (4)
1. The insect classifying method based on the multitasking convolutional neural network is characterized by comprising the following specific steps:
Step 1, collecting the insect images to be classified and graded, scaling them to 224×224, and copying them into two identical groups; one group is subjected to graying treatment to generate a first identification image, and the other group is converted from the RGB color space to the HSV color space to generate a second identification image;
step 2, extracting edge pixel points of the first identification image based on canny edge detection, generating a third identification image, recording the number of the edge pixel points in the third identification image, the area and perimeter of an area surrounded by the edge pixel points, generating edge complexity based on the number of the edge pixel points and the area surrounded by the edge pixel points, generating edge circularity based on the perimeter and the area surrounded by the edge pixel points, and generating an edge feature index based on the edge complexity and the edge circularity;
Step 3, extracting the hue, saturation and brightness of the pixel points in the second identification image, generating a hue index, a saturation index and a brightness index of the second identification image based on the average value of the hue, the saturation and the brightness of all the pixel points, and generating image vividness based on the hue index, the saturation index and the brightness index;
Step 4, splicing the edge characteristic indexes and the image vividness to generate comprehensive characteristic vectors, calculating the comprehensive characteristic vectors corresponding to different kinds of insects in different life stages according to the method of the steps 1-3, and constructing an insect characteristic library containing the comprehensive characteristic vectors of different kinds of insects in different life stages;
Step 5, constructing a model based on a multitasking convolutional neural network, taking a comprehensive feature vector in an insect feature library as input, taking the type and life stage of insects as labels, and training an insect classification model;
Step 6, inputting the comprehensive feature vector of the insect image to be classified into the trained insect classification model, and outputting the insect variety and life stage;
The principle on which the edge feature index is generated is as follows:
the formula according to which the edge complexity is generated is:
C = N/A
wherein C represents the edge complexity, N represents the number of edge pixel points, and A represents the area surrounded by the edge pixel points;
the formula according to which the edge circularity is generated is:
R = 4πA/L²
wherein R represents the edge circularity and L represents the edge perimeter;
The formula according to which the edge feature index is generated is:
Wherein X represents an edge feature index;
the principle on which the vividness of the image is generated is as follows:
The formulas according to which the hue index, saturation index and brightness index are generated are:
H = (1/M)·Σ h_i, S = (1/M)·Σ s_i, V = (1/M)·Σ v_i, summing over i = 1, ..., M
wherein H represents the hue index, i represents the index of a pixel with i ∈ {1, 2, ..., M}, M represents the number of pixels, h_i represents the hue of the i-th pixel, S represents the saturation index, s_i represents the saturation of the i-th pixel, V represents the brightness index, and v_i represents the brightness of the i-th pixel;
The formula according to which the vividness of the image is generated is as follows:
K=S·(1-|V-0.5|)·f(H)
where K represents the image vividness and f (H) represents the correction function of the hue vividness.
2. The method for classifying insects by stages based on a multitasking convolutional neural network according to claim 1, wherein in the step 2, the third recognition image is generated based on the following principle:
For each pixel point in the first identification image, the pixel point and its eight surrounding neighbours are respectively convolved with the horizontal and vertical direction templates of the Prewitt operator to generate the gray-level differences of the pixel point in the horizontal and vertical directions, according to the following formulas:
G_x(x,y) = P_X * I(x,y), G_y(x,y) = P_Y * I(x,y)
P_X = [[-1,0,1],[-1,0,1],[-1,0,1]], P_Y = [[-1,-1,-1],[0,0,0],[1,1,1]]
Wherein, P_X represents the horizontal direction template of the Prewitt operator, P_Y represents the vertical direction template of the Prewitt operator, G_x represents the horizontal direction difference of the pixel point, G_y represents the vertical direction difference of the pixel point, I(x,y) represents the 3×3 gray-level neighbourhood centred at the pixel, * denotes convolution, and (x,y) represents the coordinates of the pixel point;
according to the gray-level differences in the horizontal and vertical directions, the gradient amplitude of each pixel point is generated according to the following formula:
G(x,y) = √(G_x² + G_y²)
Wherein, G(x,y) represents the gradient amplitude of the pixel point with coordinates (x,y), G_x represents the horizontal direction difference of the pixel point, and G_y represents the vertical direction difference of the pixel point;
Presetting an edge threshold; when the gradient amplitude of a pixel point is higher than the edge threshold, the pixel point is marked as an edge pixel point.
3. The method for classifying insects according to claim 1, wherein the formula according to which the integrated feature vector is generated in the step 4 is:
Z=(X,K)
wherein Z represents the comprehensive feature vector, X represents the edge feature index, and K represents the image vividness.
4. An insect classifying and grading system based on a multitasking convolutional neural network, characterized in that the system is used for executing the insect classifying and grading method based on the multitasking convolutional neural network as set forth in any one of claims 1-3, and specifically comprises the following modules:
the image acquisition module is used for acquiring the insect images to be classified and graded, scaling them to 224×224, and copying them into two identical groups; one group is subjected to graying treatment to generate a first identification image, and the other group is converted from the RGB color space to the HSV color space to generate a second identification image;
The edge extraction module is used for extracting edge pixel points of the first identification image based on canny edge detection, generating a third identification image, recording the number of the edge pixel points in the third identification image, the area and the perimeter of an area surrounded by the edge pixel points, generating edge complexity based on the number of the edge pixel points and the area surrounded by the edge pixel points, generating edge circularity based on the perimeter and the area surrounded by the edge pixel points, and generating an edge feature index based on the edge complexity and the edge circularity;
The color extraction module is used for extracting the hue, saturation and brightness of the pixel points in the second identification image, generating a hue index, a saturation index and a brightness index of the second identification image based on the average value of the hue, the saturation and the brightness of all the pixel points, and generating image vividness based on the hue index, the saturation index and the brightness index;
The characteristic library construction module is used for splicing the edge characteristic indexes and the image vividness to generate comprehensive characteristic vectors, calculating the comprehensive characteristic vectors corresponding to different kinds of insects in different life stages according to the methods of the preceding modules, and constructing an insect characteristic library containing the comprehensive characteristic vectors of different kinds of insects in different life stages;
The model training module is used for constructing a model based on the multi-task convolutional neural network, taking the comprehensive feature vectors in the insect feature library as input and the species and life stage of the insects as labels, and training the insect classification and grading model;
And the judging and outputting module is used for inputting the comprehensive feature vector of the insect image to be classified into the trained insect classification and grading model and outputting the insect species and life stage.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510681199.7A CN120198940B (en) | 2025-05-26 | 2025-05-26 | Insect classification method and system based on multi-task convolutional neural network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN120198940A CN120198940A (en) | 2025-06-24 |
| CN120198940B true CN120198940B (en) | 2025-10-24 |
Family
ID=96062841
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510681199.7A Active CN120198940B (en) | 2025-05-26 | 2025-05-26 | Insect classification method and system based on multi-task convolutional neural network |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120198940B (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102930249A (en) * | 2012-10-23 | 2013-02-13 | 四川农业大学 | Method for identifying and counting farmland pests based on colors and models |
| CN118840604A (en) * | 2024-07-09 | 2024-10-25 | 宁波海关技术中心 | Method and device for classifying insects in grading mode based on multitasking convolutional neural network |
| CN119414398A (en) * | 2025-01-07 | 2025-02-11 | 陕西省动物研究所 | Multi-scale underwater fish detection system and detection method based on attention module |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7496228B2 (en) * | 2003-06-13 | 2009-02-24 | Landwehr Val R | Method and system for detecting and classifying objects in images, such as insects and other arthropods |
| CN105976354B (en) * | 2016-04-14 | 2019-02-01 | 广州视源电子科技股份有限公司 | Color and gradient based component positioning method and system |
| CN119438046B (en) * | 2024-10-22 | 2025-05-30 | 佳木斯大学 | Parasite egg detection method and system |
- 2025-05-26: application CN202510681199.7A filed; granted as patent CN120198940B (Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN120198940A (en) | 2025-06-24 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Panchal et al. | Plant diseases detection and classification using machine learning models | |
| CN114821014B (en) | Multi-task target detection and recognition method and device based on multimodal and adversarial learning | |
| Ramesh et al. | Plant disease detection using machine learning | |
| Francis et al. | Identification of leaf diseases in pepper plants using soft computing techniques | |
| CN105930815B (en) | A kind of underwater biological detection method and system | |
| CN106295124B (en) | The method of a variety of image detecting technique comprehensive analysis gene subgraph likelihood probability amounts | |
| Bojamma et al. | A study on the machine learning techniques for automated plant species identification: current trends and challenges | |
| Sabri et al. | Nutrient deficiency detection in maize (Zea mays L.) leaves using image processing | |
| Singh et al. | Performance analysis of CNN models with data augmentation in rice diseases | |
| Narmatha et al. | Skin cancer detection from dermoscopic images using Deep Siamese domain adaptation convolutional Neural Network optimized with Honey Badger Algorithm | |
| CN110516648B (en) | Identification method of ramie plant number based on UAV remote sensing and pattern recognition | |
| Khan et al. | Comparitive study of tree counting algorithms in dense and sparse vegetative regions | |
| CN105825168A (en) | Golden snub-nosed monkey face detection and tracking algorithm based on S-TLD | |
| Sood et al. | Image quality enhancement for Wheat rust diseased images using Histogram equalization technique | |
| Lubis et al. | Classification of tomato leaf disease and combination extraction features using K-NN algorithm | |
| Ray et al. | Guava leaf disease detection using support vector machine (SVM) | |
| Kukana | Hybrid Machine Learning Algorithm-Based Paddy Leave Disease Detection System | |
| PS et al. | Deep learning model to enhance precision agriculture using superpixel | |
| CN109241932B (en) | A thermal infrared human action recognition method based on the phase feature of the motion variance map | |
| CN120198940B (en) | Insect classification method and system based on multi-task convolutional neural network | |
| Cao et al. | Plant leaf segmentation and phenotypic analysis based on fully convolutional neural network | |
| Shi et al. | Deep change feature analysis network for observing changes of land use or natural environment | |
| CN118072374A (en) | Face recognition method for optimizing deep color race characteristics | |
| CN117496353A (en) | Method for distinguishing and locating the stem centers of weeds in rice fields based on a two-stage segmentation model | |
| Alfita et al. | Feature selection in leaf classification techniques with the black widow optimization method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| OL01 | Intention to license declared | ||
| EE01 | Entry into force of recordation of patent licensing contract |
Application publication date: 20250624 Contract record no.: X2025980046611 Denomination of invention: A method and system for insect classification and grading based on a multi-task convolutional neural network Granted publication date: 20251024 License type: Common License Record date: 20251222 |
|