CN118351100A - Image definition detection and processing method based on deep learning and gradient analysis
- Publication number: CN118351100A
- Application number: CN202410541594.0A
- Authority: CN (China)
- Prior art keywords: image, model, gradient, clear, clustering
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0002—Inspection of images, e.g. flaw detection; G06T7/0012—Biomedical image inspection
- G06F18/23—Clustering techniques
- G06N3/0464—Convolutional networks [CNN, ConvNet]; G06N3/084—Backpropagation, e.g. using gradient descent
- G06T7/90—Determination of colour characteristics
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
- G06V20/69—Microscopic objects, e.g. biological cells or cellular parts; G06V20/695—Preprocessing, e.g. image segmentation
- G06T2207/10056—Microscopic image; G06T2207/10061—Microscopic image from scanning electron microscope
- G06T2207/20081—Training; Learning; G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro; G06T2207/30168—Image quality inspection
Abstract
The invention provides an image definition detection and processing method based on deep learning and gradient analysis, comprising the following steps: image acquisition and preprocessing; information-content evaluation; staining-quality judgment; global sharpness evaluation; judgment and decision; local sharpness evaluation; and blur-threshold setting. Through staining-based screening, construction of a convolutional neural network-clustering model, and local sharpness evaluation, the invention progressively filters out a clear data set, solving the problem of selecting the qualified clear images needed for data processing from a large number of cervical images, achieving accurate judgment and efficient processing of cervical cell image sharpness, and providing high-quality image data for medical diagnosis and research.
Description
Technical Field
The invention relates to the intersection of deep learning and medicine, and in particular to an image definition (sharpness) detection and processing method based on deep learning and gradient analysis.
Background
Image sharpness detection and processing has important application value in fields such as medical diagnosis, remote sensing, video surveillance, and print quality control. In medical image analysis in particular, the sharpness of digital cervical cell images directly affects a pathologist's ability to identify cell morphology and lesion characteristics, and therefore bears on the early detection and accurate treatment of disease. However, conventional sharpness evaluation often relies on manual visual judgment or simple mathematical indices, and suffers from strong subjectivity, low efficiency, and limited precision. With the development of deep learning and computer vision technology, automatic image sharpness detection and processing based on machine learning offers clear advantages: it enables objective, efficient, and accurate evaluation of image sharpness and supports corresponding processing strategies.
Traditional image sharpness evaluation usually adopts indices such as information entropy, the gray-level co-occurrence matrix, and structural similarity, but these methods lack robustness under complex backgrounds, illumination changes, and color distortion. In recent years, deep learning, especially convolutional neural networks (CNNs), has been widely applied and provides new solutions for image sharpness detection. A CNN can automatically extract rich hierarchical features from the raw image, comprehensively evaluate global and local sharpness, and overcome the limitations of traditional methods. In addition, gradient analysis, as an effective means of describing the strength and direction of image edges, is often used to quantify local sharpness precisely; combined with deep learning, it can further improve the accuracy of sharpness detection.
Disclosure of Invention
The main purpose of the invention is to provide an image definition detection and processing method based on deep learning and gradient analysis that addresses the problems of digital cervical cell images in information-content screening, staining-quality judgment, global and local sharpness evaluation, and impurity detection, raising the degree of automation and the diagnostic accuracy of image processing and helping to improve the efficiency and quality of cervical cell pathology analysis.
To solve these technical problems, the invention adopts the following technical scheme: an image definition detection and processing method based on deep learning and gradient analysis, comprising the following steps:
S1, image acquisition: scan the whole slide to obtain digital cervical cell images, and process the images to a uniform size;
S2, information-content evaluation: input the preprocessed image into an information entropy function and calculate the information content of each picture; if the information content is greater than a preset threshold T, the picture is judged qualified; otherwise it is judged to be a blank picture or a picture with little information;
S3, staining-quality judgment: calculate the sums of the image's R, G, and B values separately, and judge whether the staining is qualified against the standard staining RGB thresholds;
S4, global sharpness evaluation: construct a convolutional neural network-clustering model comprising a main model and a three-way clustering branch; input the images that passed the checks of steps S2 and S3 into the model; the main model computes a global sharpness score S for the whole picture, the clustering branch partitions the picture scores into a clear set, a blurred set, and a pending set, and the key thresholds T1 and T2 marking the clear/pending and pending/blurred boundaries are extracted;
S5, judgment and decision: using the key thresholds T1 and T2 and each picture's score S, judge whether the picture is clear, blurred, or pending, and pass pending images on to local sharpness evaluation;
S6, local sharpness evaluation: perform gradient calculation with the Sobel operator, compute the gradient of each pixel in the horizontal and vertical directions, and calculate the gradient magnitude G and gradient direction θ;
S7, blur-threshold setting: set a gradient threshold T3 to distinguish clear and blurred regions, traverse the image, mark pixel coordinates above T3 as locally clear regions, and keep the marked pending image as a backup set of clear images.
In a preferred embodiment, in step S1, the digital image of cervical cells is unified to a size of 1024×1024 pixels.
In a preferred embodiment, the step S2 includes the following sub-steps:
S21, converting the input color image into a grayscale image, with the specific formula:
Igray(x,y) = GrayConversion(Icolor(x,y))    (1)
where Icolor is the original color image, Igray is the converted grayscale image, and GrayConversion is the grayscale conversion function;
S22, calculating the histogram: compute the histogram H of the denoised image Ismooth, where H(k) denotes the frequency of pixels with gray value k;
S23, calculating the information entropy: from the gray histogram H, compute the information entropy E of the image:
E = -Σ_{k=0}^{L-1} p(k) · log2 p(k), with p(k) = H(k)/(M×N)    (2)
where L is the number of gray levels, H(k) is the frequency of pixels with gray value k, M×N is the total number of pixels in the image, and p(k) represents the probability of occurrence of gray value k;
S24, judging and recording: compare the computed information entropy E with the preset threshold T; if E > T, the image's information content is considered qualified; otherwise the image is judged to be blank or to contain little information. A code sketch of sub-steps S21 to S24 together with the staining check of step S3 follows.
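For illustration, the information-content screen (S2) and the staining check (S3) can be sketched in Python as below, assuming OpenCV and NumPy; the entropy threshold follows the T = 2 of Embodiment 1, while the RGB-sum bounds are hypothetical placeholders, since the patent does not disclose the standard staining thresholds.

```python
import cv2
import numpy as np

ENTROPY_T = 2.0                     # preset entropy threshold T (value from Embodiment 1)
RGB_SUM_BOUNDS = (0.8e8, 2.5e8)     # hypothetical bounds for the per-channel sums of step S3

def information_entropy(gray: np.ndarray) -> float:
    """Shannon entropy of an 8-bit grayscale image, Eq. (2)."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / gray.size                    # p(k) = H(k) / (M*N)
    p = p[p > 0]                            # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def passes_screening(bgr: np.ndarray) -> bool:
    """S2 entropy screen followed by the S3 staining (RGB-sum) check."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)                 # Eq. (1)
    if information_entropy(gray) <= ENTROPY_T:
        return False                                             # blank or low-information picture
    lo, hi = RGB_SUM_BOUNDS
    sums = bgr.reshape(-1, 3).sum(axis=0, dtype=np.float64)      # per-channel B, G, R sums
    return bool(np.all((sums >= lo) & (sums <= hi)))             # staining within standard range
```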
In a preferred embodiment, in step S4, the main model and the three-way clustering branch are specified as follows:
Main model: a feature extraction module composed of several convolutional layers, a fully connected layer, and a final sharpness-score output layer; the main model extracts features from the input image and outputs a sharpness score;
Three-way clustering branch: shares the first several convolutional feature-extraction layers with the main model to reuse the learned general image features; after the shared feature extractor, it combines the image score output by the main model and adds a lightweight fully connected three-way clustering layer specific to the clustering task, whose output dimension is 3, corresponding to the clear, blurred, and pending classes;
Threshold determination: according to the clustering result, determine the thresholds separating clear from pending and pending from blurred; try different threshold combinations by analyzing cluster centers, cluster boundaries, or statistics of the data points within each cluster, and select the optimal thresholds by cross-validation or another evaluation method. A sketch of this architecture is given below.
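As a concrete illustration of this two-head design, a minimal PyTorch sketch follows; the backbone depth, channel widths, and the way the score S is concatenated into the clustering layer are assumptions, since the patent does not fix these details.

```python
import torch
import torch.nn as nn

class SharpnessNet(nn.Module):
    """Shared convolutional backbone with a sharpness-score head (main model)
    and a lightweight 3-way clustering head (clear / pending / blurred)."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                  # shared feature extraction module
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.score_head = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())  # S in [0, 1]
        # the clustering layer also sees the score S, as the text describes
        self.cluster_head = nn.Linear(64 + 1, 3)        # logits for (clear, pending, blurred)

    def forward(self, x):
        feats = self.backbone(x)
        s = self.score_head(feats)                      # global sharpness score S
        logits = self.cluster_head(torch.cat([feats, s], dim=1))
        p = torch.softmax(logits, dim=1)                # P = (Pc, Pu, Pf), Pc + Pu + Pf = 1
        return s.squeeze(1), p
```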
In a preferred scheme, step S4 further includes the construction process of the convolutional neural network-clustering model:
S31, data preparation: collect a large number of image samples covering the clear, blurred, and pending categories, label the samples, and preprocess them as in step S1;
S32, model architecture: the main model comprises several convolutional layers (the feature extraction module), a fully connected layer, and a sharpness-score output layer; the last layer of the main model is a fully connected layer whose output is a single value S, constrained to the range [0,1] by a Sigmoid activation function;
The three-way clustering branch shares the feature extraction module with the main model, followed by a lightweight fully connected layer of output dimension 3 that represents the probability distribution P = (Pc, Pu, Pf) over the clear, pending, and blurred classes, satisfying Pc + Pu + Pf = 1;
S33, loss function: for the main model, binary cross entropy is used as the loss function LS for the global sharpness score:
LS = -(1/N) · Σ_{i=1}^{N} [ yi·log(Si) + (1-yi)·log(1-Si) ]    (3)
where N is the number of samples, yi is the true label of the i-th sample (1 for clear, 0 for blurred), and Si is the model-predicted global sharpness score;
For the three-way clustering branch, multi-class cross entropy is used as the loss function LP to measure the difference between the class probability distribution P and the true labels:
LP = -(1/N) · Σ_{i=1}^{N} Σ_{j=1}^{3} yij·log(Pij)    (4)
where yij is 1 if the i-th sample belongs to class j and 0 otherwise, and Pij is the model-predicted probability that the i-th sample belongs to class j;
The total loss function is: L = WS·LS + WP·LP    (5)
where WS and WP are the loss weights of the main model and the clustering branch, with WS + WP = 1;
S34, model optimization: compute the gradients of the loss function with respect to the model parameters by backpropagation, and update the parameters with the Adam optimizer;
S35, training process: divide the data set into training, validation, and test sets, and iteratively train the model on the training set; in each iteration, forward propagation computes the loss, backpropagation computes the gradients, and the model parameters are updated; periodically evaluate the model on the validation set, adjust hyperparameters or apply early stopping according to validation performance, and evaluate final performance on the test set after training;
S36, model evaluation: for the main model, the correlation between predicted scores and true labels can be computed to evaluate performance;
for the three-way clustering branch, accuracy, the confusion matrix, and the F1 score can be computed to evaluate the classification effect. A sketch of one training iteration covering S33 to S35 is given below.
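Under the same assumptions, one training iteration of S33 to S35 might look like the sketch below; the weight split WS = WP = 0.5 and the learning rate are illustrative, and SharpnessNet refers to the architecture sketch above.

```python
import torch
import torch.nn.functional as F

W_S, W_P = 0.5, 0.5        # loss weights with W_S + W_P = 1 (illustrative split)
model = SharpnessNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)    # S34: Adam optimizer

def train_step(images, y_sharp, y_cluster):
    """images: (N,3,H,W); y_sharp: (N,) float in {0,1}; y_cluster: (N,) long in {0,1,2}."""
    s, p = model(images)
    loss_s = F.binary_cross_entropy(s, y_sharp)              # Eq. (3), main model
    loss_p = F.nll_loss(torch.log(p + 1e-12), y_cluster)     # Eq. (4); p is already softmax-normalized
    loss = W_S * loss_s + W_P * loss_p                       # Eq. (5), total loss
    optimizer.zero_grad()
    loss.backward()                                          # backpropagation computes gradients
    optimizer.step()                                         # parameter update
    return loss.item()
```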
In a preferred embodiment, step S5 comprises the following specific steps:
If S > T1: the whole picture is judged clear; no special processing is needed, the original picture is retained in the data set, and the flow ends;
If S < T2: the whole picture is judged blurred; the coordinate information of the blurred region is recorded, a re-scan is scheduled, and the flow ends;
If T2 ≤ S < T1: local sharpness evaluation is started.
In a preferred embodiment, if S > T1: the whole picture is judged clear, and the clear image is passed to a binary classification model that judges whether it contains impurities; images classified as impurity are removed, and the remaining originals are retained as the data set. This routing is sketched in code below.
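The routing of step S5, including the impurity check of the preferred embodiment, can be written compactly as below; the Verdict names and the is_impurity flag are illustrative conveniences rather than terms from the patent.

```python
from enum import Enum

class Verdict(Enum):
    KEEP = "clear: keep in the data set"
    DISCARD = "impurity: remove"
    RESHOOT = "blurred: record region coordinates and re-scan"
    PENDING = "pending: run local sharpness evaluation (S6-S7)"

def decide(score: float, t1: float, t2: float, is_impurity: bool) -> Verdict:
    """Route one picture by its global sharpness score S, with thresholds T1 > T2."""
    if score > t1:
        # globally clear, but reject frames the binary model flags as impurity
        return Verdict.DISCARD if is_impurity else Verdict.KEEP
    if score < t2:
        return Verdict.RESHOOT
    return Verdict.PENDING          # T2 <= S < T1
```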
In a preferred embodiment, step S6 comprises the following sub-steps:
S41, apply smoothing such as Gaussian filtering to the grayscale image to reduce the influence of noise on the gradient calculation:
Ismooth(x,y) = GaussianFilter(Igray, σ)    (6)
where Ismooth is the smoothed image, GaussianFilter is a Gaussian filter function, and σ is the standard deviation of the Gaussian kernel;
S42, compute the gradient of each pixel (x,y) in the horizontal and vertical directions using Sobel-style pixel differences:
Gx(x,y) = Ismooth(x+1,y) - Ismooth(x-1,y)    (7)
Gy(x,y) = Ismooth(x,y+1) - Ismooth(x,y-1)    (8)
where Ismooth(x,y) is the value of the smoothed image at position (x,y);
S43, compute the gradient magnitude G and gradient direction θ using the Euclidean norm and the arctangent function:
G(x,y) = √(Gx(x,y)² + Gy(x,y)²)    (9)
θ(x,y) = arctan(Gy(x,y) / Gx(x,y))    (10)
where G expresses how steep the image edge is at the pixel and reflects local sharpness, and θ represents the edge direction. A code sketch of this local evaluation follows.
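A sketch of the local evaluation (S6, with the thresholding of S7) using OpenCV follows; the threshold value t3 is a placeholder, and cv2.Sobel with a 3×3 kernel is used as a stand-in for the simple central differences of Eqs. (7)-(8), which it generalizes.

```python
import cv2
import numpy as np

def local_clear_mask(gray: np.ndarray, t3: float, sigma: float = 1.0) -> np.ndarray:
    """S6-S7: smooth, take Sobel gradients, and mark pixels whose gradient
    magnitude exceeds the threshold T3 as locally clear."""
    smooth = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma)   # Eq. (6)
    gx = cv2.Sobel(smooth, cv2.CV_32F, 1, 0, ksize=3)                   # Eq. (7), horizontal gradient
    gy = cv2.Sobel(smooth, cv2.CV_32F, 0, 1, ksize=3)                   # Eq. (8), vertical gradient
    g = np.sqrt(gx**2 + gy**2)                                          # Eq. (9), gradient magnitude
    theta = np.arctan2(gy, gx)                                          # Eq. (10); kept for edge direction
    return g > t3                                                       # True where locally clear

# usage: coordinates of locally clear pixels, for the backup set of step S7
# ys, xs = np.nonzero(local_clear_mask(gray_image, t3=50.0))
```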
In summary, the method provides sharpness detection and processing for digital cervical cell images based on deep learning and gradient analysis. The images are resized to a uniform size and preprocessed; pictures with sufficient information content are screened by the information entropy function, and the R, G, and B sums are computed to judge staining quality; a convolutional neural network-clustering model performs global sharpness evaluation, with the main model computing image scores and the three-way clustering branch partitioning images into clear, blurred, and pending sets and determining the key thresholds; images are then classified by score and threshold, and pending images receive local sharpness evaluation in which the Sobel operator computes gradients and a gradient threshold separates clear from blurred regions. The method effectively screens out qualified images, accurately evaluates global and local sharpness, and guides the decision to re-scan or retain images, supporting the accuracy and efficiency of cervical cell pathology analysis and the quality of medical diagnosis.
The invention has the beneficial effects that:
(1) Comprehensive image quality assessment: the method combines multidimensional evaluation of information content, staining quality, and global and local sharpness; compared with traditional methods it evaluates cervical cell image quality more comprehensively and accurately, helping pathologists identify cell morphology and lesions.
(2) Deep learning global sharpness assessment: the convolutional neural network-clustering model provides precise scoring, exploiting the advantages of deep learning in feature extraction and pattern recognition and overcoming the limitations of traditional methods on complex image features; the three-way clustering branch refines sharpness classification, improving granularity and accuracy.
(3) Gradient-based local sharpness mining: the Sobel operator computes gradients on pending images and a gradient threshold identifies locally clear regions, compensating for local details that global evaluation may miss, helping to recover valuable local information from blurred images, providing clues for pathological analysis, and supporting re-scan or image-enhancement decisions.
(4) Systematic automated pipeline: a complete flow is built from image acquisition, preprocessing, information-content and staining-quality judgment, and global and local sharpness evaluation to impurity detection; the links are tightly connected, achieving a high degree of automation in cervical cell image quality evaluation and processing, markedly improving efficiency and reducing labor cost.
Drawings
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
FIG. 1 is a flow chart of the image sharpness detection and processing method of the invention;
FIG. 2 is an example slide image with a large amount of information in an embodiment of the invention;
FIG. 3 is an example slide image with a small amount of information in an embodiment of the invention;
FIG. 4 is an example slide image with good staining quality in an embodiment of the invention;
FIG. 5 is an example slide image with poor staining quality in an embodiment of the invention;
FIG. 6 is an example slide image with focus blur in an embodiment of the invention;
FIG. 7 is an example slide image focused onto an impurity layer in an embodiment of the invention.
Detailed Description
Example 1
As shown in FIGS. 1 to 7, an image definition detection and processing method based on deep learning and gradient analysis includes the following steps:
S1, image acquisition: scan the whole slide with an LD CytoMatrix scanner to obtain digital cervical cell images, and process the images to a uniform size;
S2, information-content evaluation: input the preprocessed image into the information entropy function and calculate the information content of each picture; if the information content is greater than the preset threshold T, the picture is judged qualified; otherwise it is judged to be a blank picture or a picture with little information;
S3, staining-quality judgment: calculate the sums of the image's R, G, and B values separately and judge staining against the standard staining RGB thresholds; this ensures that staining meets the standard, avoids cell structures becoming hard to identify because of irregular staining, and improves the accuracy of image analysis.
S4, global sharpness evaluation: construct the convolutional neural network-clustering model comprising a main model and a three-way clustering branch; input the images that passed the information-content and staining checks of steps S2 and S3 into the model; the main model computes a global sharpness score S for the whole picture, the clustering branch partitions the picture scores into a clear set, a blurred set, and a pending set, and the key thresholds T1 and T2 marking the clear/pending and pending/blurred boundaries are extracted;
S5, judgment and decision: using the key thresholds T1 and T2 and each picture's score S, judge whether the picture is clear, blurred, or pending, and pass pending images on to local sharpness evaluation;
S6, local sharpness evaluation: perform gradient calculation with the Sobel operator, compute the gradient of each pixel in the horizontal and vertical directions, and calculate the gradient magnitude G and gradient direction θ;
S7, blur-threshold setting: set a gradient threshold T3 to distinguish clear and blurred regions, traverse the image, mark pixel coordinates above T3 as locally clear regions, and keep the marked pending image as a backup set of clear images. Marking locally clear regions as a backup for the clear data set lets the clear portions continue to provide data value and improves diagnostic reliability.
The method comprehensively evaluates cervical cell images through an automated pipeline. The whole-slide image is scanned and processed; images rich in information and with qualified staining are screened out; global sharpness is evaluated with the convolutional neural network-clustering model; clear, blurred, and pending images are further separated by local sharpness evaluation; and marked pending images are kept as a backup set of clear images for later use. This improves the efficiency and accuracy of image evaluation, reduces manual intervention, provides a more reliable basis for cervical cell detection, and helps improve the accuracy and efficiency of medical diagnosis.
In a preferred embodiment, in step S1, the cervical cell digital images are unified to 1024×1024 pixels, which standardizes subsequent processing and improves computational efficiency.
In a preferred embodiment, the step S2 includes the following sub-steps:
S21, converting the input color image into a grayscale image, with the specific formula:
Igray(x,y) = GrayConversion(Icolor(x,y))    (1)
where Icolor is the original color image, Igray is the converted grayscale image, and GrayConversion is the grayscale conversion function;
Converting the input color image to grayscale simplifies the processing flow and reduces computational complexity, while eliminating the interference of color information with the subsequent entropy calculation so that the evaluation focuses on structural information such as brightness and texture. A specific grayscale conversion function, such as the common linear-interpolation or weighted-average methods, is used so that the important visual characteristics of the image are retained.
S22, calculating the histogram: compute the histogram H of the denoised image Ismooth, where H(k) denotes the frequency of pixels with gray value k;
S23, calculating the information entropy: from the gray histogram H, compute the information entropy E of the image:
E = -Σ_{k=0}^{L-1} p(k) · log2 p(k), with p(k) = H(k)/(M×N)    (2)
where L is the number of gray levels, H(k) is the frequency of pixels with gray value k, M×N is the total number of pixels in the image, and p(k) represents the probability of occurrence of gray value k;
S24, judging and recording: compare the computed information entropy E with the preset threshold T = 2; if E > 2, the image's information content is considered qualified; otherwise the image is judged to be blank or to contain little information.
Computing the histogram of the denoised grayscale image directly reflects the distribution of pixels over the gray levels and provides the basic data for the entropy calculation. The histogram captures the overall brightness distribution and contrast of the image, helping to identify problems such as overly uniform distribution, under- or over-exposure, and low contrast.
Information entropy, as a measure of image information content, effectively evaluates the complexity and uncertainty of image content. Formula (2) reflects the diversity of the pixel-value distribution by accounting for the number of gray levels, the frequency of each gray value, and its probability. A higher entropy means more information and richer, more complex content; a lower entropy suggests the image is likely blank, information-sparse, or uniform in content.
By comparing the computed entropy E with the preset threshold T, images with qualified information content (E > 2) can be screened automatically and quantitatively; these usually contain enough detail and content for subsequent analysis and processing. Images that fail (E ≤ 2) are identified as blank or low-information; they have no practical value for subsequent analysis and can be discarded or flagged, avoiding wasted computation.
Through grayscale conversion, histogram calculation, and entropy evaluation, images rich in information and of diagnostic value are effectively identified, blank or information-poor images are removed, and the influence of useless data on subsequent processing and diagnosis is reduced.
In a preferred embodiment, in step S4, the main model and the three-way clustering branch are specified as follows:
Main model: a feature extraction module composed of several convolutional layers, a fully connected layer, and a final sharpness-score output layer; the main model extracts features from the input image and outputs a sharpness score;
Three-way clustering branch: shares the first several convolutional feature-extraction layers with the main model to reuse the learned general image features; after the shared feature extractor, it combines the image score output by the main model and adds a lightweight fully connected three-way clustering layer specific to the clustering task, whose output dimension is 3, corresponding to the clear, blurred, and pending classes;
Threshold determination: according to the clustering result, determine the thresholds separating clear from pending and pending from blurred; try different threshold combinations by analyzing cluster centers, cluster boundaries, or statistics of the data points within each cluster, and select the optimal thresholds by cross-validation or another evaluation method.
The strong feature-extraction capability of the convolutional neural network, combined with clustering, enables precise evaluation of global image sharpness. The main model provides the sharpness score, the three-way clustering branch completes the classification, and together they give a comprehensive grasp of image sharpness; the thresholds separating the clear, pending, and blurred categories are determined by analyzing the clustering results, providing a clear basis for subsequent judgment and enhancing the objectivity and consistency of the evaluation.
In a preferred scheme, step S4 further includes the construction process of the convolutional neural network-clustering model:
S31, data preparation: collect a large number of image samples covering the clear, blurred, and pending categories, label the samples, and preprocess them as in step S1;
S32, model architecture: the main model comprises several convolutional layers (the feature extraction module), a fully connected layer, and a sharpness-score output layer; the last layer of the main model is a fully connected layer whose output is a single value S, constrained to the range [0,1] by a Sigmoid activation function;
The three-way clustering branch shares the feature extraction module with the main model, followed by a lightweight fully connected layer of output dimension 3 that represents the probability distribution P = (Pc, Pu, Pf) over the clear, pending, and blurred classes, satisfying Pc + Pu + Pf = 1;
S33, loss function: for the main model, binary cross entropy is used as the loss function LS for the global sharpness score:
LS = -(1/N) · Σ_{i=1}^{N} [ yi·log(Si) + (1-yi)·log(1-Si) ]    (3)
where N is the number of samples, yi is the true label of the i-th sample (1 for clear, 0 for blurred), and Si is the model-predicted global sharpness score;
For the three-way clustering branch, multi-class cross entropy is used as the loss function LP to measure the difference between the class probability distribution P and the true labels:
LP = -(1/N) · Σ_{i=1}^{N} Σ_{j=1}^{3} yij·log(Pij)    (4)
where yij is 1 if the i-th sample belongs to class j and 0 otherwise, and Pij is the model-predicted probability that the i-th sample belongs to class j;
The total loss function is: L = WS·LS + WP·LP    (5)
where WS and WP are the loss weights of the main model and the clustering branch, with WS + WP = 1;
S34, model optimization: compute the gradients of the loss function with respect to the model parameters by backpropagation, and update the parameters with the Adam optimizer;
S35, training process: divide the data set into training, validation, and test sets, and iteratively train the model on the training set; in each iteration, forward propagation computes the loss, backpropagation computes the gradients, and the model parameters are updated; periodically evaluate the model on the validation set, adjust hyperparameters or apply early stopping according to validation performance, and evaluate final performance on the test set after training;
S36, model evaluation: for the main model, the correlation between predicted scores and true labels can be computed to evaluate performance;
for the three-way clustering branch, accuracy, the confusion matrix, and the F1 score can be computed to evaluate the classification effect.
In a preferred embodiment, step S5 comprises the following specific steps:
If S > T1: the whole picture is judged clear; no special processing is needed, the original picture is retained in the data set, and the flow ends;
If S < T2: the whole picture is judged blurred; the coordinate information of the blurred region is recorded, a re-scan is scheduled, and the flow ends;
If T2 ≤ S < T1: local sharpness evaluation is started.
By comparing the global sharpness score with the preset key thresholds, this flow quickly determines whether an image is clear, blurred, or pending overall, dividing images into clear images, blurred images that need re-scanning, and pending images that need further evaluation. For images judged blurred, the coordinates of the blurred region are recorded and a re-scan is scheduled, improving data quality and reducing problem images in subsequent processing; clear images need no further processing and are kept directly as the data set, saving the resources and time of unnecessary processing. Running local sharpness evaluation only on pending images, rather than on all images, avoids unnecessary computation on clear images and improves processing efficiency.
In a preferred embodiment, if S > T1: the whole picture is judged clear, and the clear image is passed to a binary classification model that judges whether it contains impurities; images classified as impurity are removed, and the remaining originals are retained as the data set.
Even among clear images there are images containing impurities, which are not wanted for data processing; the above step uses the binary classification model to remove them.
In a preferred embodiment, step S6 comprises the following sub-steps:
S41, apply smoothing such as Gaussian filtering to the grayscale image to reduce the influence of noise on the gradient calculation:
Ismooth(x,y) = GaussianFilter(Igray, σ)    (6)
where Ismooth is the smoothed image, GaussianFilter is a Gaussian filter function, and σ is the standard deviation of the Gaussian kernel;
S42, compute the gradient of each pixel (x,y) in the horizontal and vertical directions using Sobel-style pixel differences:
Gx(x,y) = Ismooth(x+1,y) - Ismooth(x-1,y)    (7)
Gy(x,y) = Ismooth(x,y+1) - Ismooth(x,y-1)    (8)
where Ismooth(x,y) is the value of the smoothed image at position (x,y);
S43, compute the gradient magnitude G and gradient direction θ using the Euclidean norm and the arctangent function:
G(x,y) = √(Gx(x,y)² + Gy(x,y)²)    (9)
θ(x,y) = arctan(Gy(x,y) / Gx(x,y))    (10)
where G expresses how steep the image edge is at the pixel and reflects local sharpness, and θ represents the edge direction.
The procedure comprises image smoothing, gradient calculation, gradient magnitude and direction calculation, and local sharpness evaluation. Gaussian filtering reduces image noise, providing a cleaner basis for gradient calculation, preventing noise from disturbing the gradients, and improving the accuracy of the local evaluation. The Sobel operator computes the horizontal and vertical gradients, effectively detecting edges in the image and providing a basis for the subsequent sharpness judgment. The Euclidean norm and arctangent give the gradient magnitude and direction: the magnitude expresses how steep an edge is and relates directly to local sharpness, while the direction helps identify edge orientation and aids understanding of image content and structure. Setting a gradient threshold separates clear from blurred regions: pixels above the threshold are considered locally clear, so clear regions can be accurately extracted from pending images, improving image usability and the accuracy of later analysis. In addition, for regions judged blurred, coordinates can be recorded and re-scanning planned, improving image quality and saving the time and resources otherwise spent on low-quality images; finally, keeping the marked pending images as a backup set of clear images provides additional high-quality data for later model training or analysis.
Example 2
To verify the effectiveness of the method, independent pictures were tested: 1441 clear pictures, 28407 blurred pictures, and 500 impurity pictures, on which the algorithm reaches an accuracy of 99.1%. Scanned samples were also tested: of 2000 samples scanned, 7 failed, an accuracy of 99.6%.
The above embodiments are only preferred embodiments of the invention and should not be construed as limiting it; the scope of protection is defined by the claims, including equivalents of the technical features therein. Equivalent replacements and modifications within this scope also fall within the scope of the invention.
Claims (8)
1. An image definition detection and processing method based on deep learning and gradient analysis, characterized by comprising the following steps:
S1, image acquisition and preprocessing: scanning the whole slide to obtain digital cervical cell images, and processing the images to a uniform size;
S2, information-content evaluation: inputting the preprocessed image into an information entropy function and calculating the information content of each picture; if the information content is greater than a preset threshold T, the picture is judged qualified, and otherwise it is judged to be a blank picture or a picture with little information;
S3, staining-quality judgment: calculating the sums of the image's R, G, and B values separately, and judging whether the staining is qualified against the standard staining RGB thresholds;
S4, global sharpness evaluation: constructing a convolutional neural network-clustering model comprising a main model and a three-way clustering branch; inputting the images that passed the checks of steps S2 and S3 into the model; the main model computes a global sharpness score S for the whole picture, the clustering branch partitions the picture scores into a clear set, a blurred set, and a pending set, and the key thresholds T1 and T2 marking the clear/pending and pending/blurred boundaries are extracted;
S5, judgment and decision: using the key thresholds T1 and T2 and each picture's score S, judging whether the picture is clear, blurred, or pending, and passing pending images on to local sharpness evaluation;
S6, local sharpness evaluation: performing gradient calculation with the Sobel operator, computing the gradient of each pixel in the horizontal and vertical directions, and calculating the gradient magnitude G and gradient direction θ;
S7, blur-threshold setting: setting a gradient threshold T3 to distinguish clear and blurred regions, traversing the image, marking pixel coordinates above T3 as locally clear regions, and keeping the marked pending image as a backup set of clear images.
2. The image definition detection and processing method based on deep learning and gradient analysis according to claim 1, characterized in that in step S1 the cervical cell digital images are unified to a size of 1024×1024 pixels.
3. The image definition detection and processing method based on deep learning and gradient analysis according to claim 1, characterized in that step S2 comprises the following sub-steps:
S21, converting the input color image into a grayscale image:
Igray(x,y) = GrayConversion(Icolor(x,y))    (1)
where Icolor is the original color image, Igray is the converted grayscale image, and GrayConversion is the grayscale conversion function;
S22, calculating the histogram: computing the histogram H of the denoised image Ismooth, where H(k) denotes the frequency of pixels with gray value k;
S23, calculating the information entropy: from the gray histogram H, computing the information entropy E of the image:
E = -Σ_{k=0}^{L-1} p(k) · log2 p(k), with p(k) = H(k)/(M×N)    (2)
where L is the number of gray levels, H(k) is the frequency of pixels with gray value k, M×N is the total number of pixels in the image, and p(k) represents the probability of occurrence of gray value k;
S24, judging and recording: comparing the computed information entropy E with the preset threshold T; if E > T, the image's information content is considered qualified; otherwise the image is judged to be blank or to contain little information.
4. The image definition detection and processing method based on deep learning and gradient analysis according to claim 1, characterized in that in step S4 the main model and the three-way clustering branch are specified as follows:
main model: a feature extraction module composed of several convolutional layers, a fully connected layer, and a final sharpness-score output layer; the main model extracts features from the input image and outputs a sharpness score;
three-way clustering branch: shares the first several convolutional feature-extraction layers with the main model to reuse the learned general image features; after the shared feature extractor, it combines the image score output by the main model and adds a lightweight fully connected three-way clustering layer specific to the clustering task, whose output dimension is 3, corresponding to the clear, blurred, and pending classes;
threshold determination: according to the clustering result, the thresholds separating clear from pending and pending from blurred are determined; different threshold combinations are tried by analyzing cluster centers, cluster boundaries, or statistics of the data points within each cluster, and the optimal thresholds are selected by cross-validation or another evaluation method.
5. The image definition detection and processing method based on deep learning and gradient analysis according to claim 1, characterized in that step S4 further comprises the construction process of the convolutional neural network-clustering model:
S31, data preparation: collecting a large number of image samples covering the clear, blurred, and pending categories, labeling the samples, and preprocessing them as in step S1;
S32, model architecture: the main model comprises several convolutional layers (the feature extraction module), a fully connected layer, and a sharpness-score output layer; the last layer of the main model is a fully connected layer whose output is a single value S, constrained to the range [0,1] by a Sigmoid activation function;
the three-way clustering branch shares the feature extraction module with the main model, followed by a lightweight fully connected layer of output dimension 3 representing the probability distribution P = (Pc, Pu, Pf) over the clear, pending, and blurred classes, with Pc + Pu + Pf = 1;
S33, loss function: for the main model, binary cross entropy is used as the loss function LS for the global sharpness score:
LS = -(1/N) · Σ_{i=1}^{N} [ yi·log(Si) + (1-yi)·log(1-Si) ]    (3)
where N is the number of samples, yi is the true label of the i-th sample (1 for clear, 0 for blurred), and Si is the model-predicted global sharpness score;
for the three-way clustering branch, multi-class cross entropy is used as the loss function LP to measure the difference between the class probability distribution P and the true labels:
LP = -(1/N) · Σ_{i=1}^{N} Σ_{j=1}^{3} yij·log(Pij)    (4)
where yij is 1 if the i-th sample belongs to class j and 0 otherwise, and Pij is the model-predicted probability that the i-th sample belongs to class j;
the total loss function is: L = WS·LS + WP·LP    (5)
where WS and WP are the loss weights of the main model and the clustering branch, with WS + WP = 1;
S34, model optimization: computing the gradients of the loss function with respect to the model parameters by backpropagation, and updating the parameters with the Adam optimizer;
S35, training process: dividing the data set into training, validation, and test sets and iteratively training the model on the training set; in each iteration, forward propagation computes the loss, backpropagation computes the gradients, and the model parameters are updated; periodically evaluating the model on the validation set, adjusting hyperparameters or applying early stopping according to validation performance, and evaluating final performance on the test set after training;
S36, model evaluation: for the main model, the correlation between predicted scores and true labels can be computed to evaluate performance; for the three-way clustering branch, accuracy, the confusion matrix, and the F1 score can be computed to evaluate the classification effect.
6. The image definition detection and processing method based on deep learning and gradient analysis according to claim 1, characterized in that step S5 comprises:
if S > T1: the whole picture is judged clear; no special processing is needed, the original picture is retained in the data set, and the flow ends;
if S < T2: the whole picture is judged blurred; the coordinate information of the blurred region is recorded, a re-scan is scheduled, and the flow ends;
if T2 ≤ S < T1: local sharpness evaluation is started.
7. The image definition detection and processing method based on deep learning and gradient analysis according to claim 6, characterized in that if S > T1, the whole picture is judged clear and passed to a binary classification model that judges whether it contains impurities; images classified as impurity are removed, and the remaining originals are retained as the data set.
8. The image definition detection and processing method based on deep learning and gradient analysis according to claim 1, characterized in that step S6 comprises the following sub-steps:
S41, applying smoothing such as Gaussian filtering to the grayscale image to reduce the influence of noise on gradient calculation:
Ismooth(x,y) = GaussianFilter(Igray, σ)    (6)
where Ismooth is the smoothed image, GaussianFilter is a Gaussian filter function, and σ is the standard deviation of the Gaussian kernel;
S42, computing the gradient of each pixel (x,y) in the horizontal and vertical directions using Sobel-style pixel differences:
Gx(x,y) = Ismooth(x+1,y) - Ismooth(x-1,y)    (7)
Gy(x,y) = Ismooth(x,y+1) - Ismooth(x,y-1)    (8)
where Ismooth(x,y) is the value of the smoothed image at position (x,y);
S43, computing the gradient magnitude G and gradient direction θ using the Euclidean norm and the arctangent function:
G(x,y) = √(Gx(x,y)² + Gy(x,y)²)    (9)
θ(x,y) = arctan(Gy(x,y) / Gx(x,y))    (10)
where G measures how steep the image edge is at the pixel and reflects local sharpness, and θ represents the edge direction.
Priority Application
- CN202410541594.0A (CN), filed 2024-04-30, priority date 2024-04-30: Image definition detection and processing method based on deep learning and gradient analysis

Publication
- CN118351100A, published 2024-07-16 (status: pending)

Family
- Family ID: 91820685; family application CN202410541594.0A (CN118351100A, pending)
Cited By (4)
- CN118822916A (published 2024-10-22): A method and system for real-time image correction of hysteroscope
- CN119228799A (published 2024-12-31): Image data quality detection method and system
- CN119313660A (published 2025-01-14): Dynamic detection and feature extraction method of textile spindle speed
- CN119723307A (published 2025-03-28): Automatic screening method of advertising promotion AI pictures based on online network platform
Similar Documents
- CN110097034B: Intelligent face health degree identification and evaluation method
- WO2021139258A1: Image recognition based cell recognition and counting method and apparatus, and computer device
- CN113435460B: A recognition method for bright crystal granular limestone images
- CN110376198B: Cervical liquid-based cell slice quality detection system
- CN118351100A: Image definition detection and processing method based on deep learning and gradient analysis
- CN111985536A: Gastroscope pathological image classification method based on weak supervised learning
- CN108921201B: Dam defect identification and classification method based on feature combination and CNN
- CN111915572B: Adaptive gear pitting quantitative detection system and method based on deep learning
- CN112380900A: Deep learning-based cervical fluid-based cell digital image classification method and system
- CN108288506A: Cancer pathology aided diagnosis method based on artificial intelligence technology
- CN115909006B: Mammary tissue image classification method and system based on convolution Transformer
- CN111968147B: Breast cancer pathological image comprehensive analysis system based on key point detection
- CN117576687B: Cervical cancer cytology screening system and method based on image analysis
- CN113724842B: Cervical tissue pathology auxiliary diagnosis method based on attention mechanism
- CN113470041B: Immunohistochemical cell image cell nucleus segmentation and counting method and system
- CN118429347A: Textile defect flaw detection system
- CN114820510A: Cytopathology image quality evaluation method
- CN119559637A: Immunoblotting data classification system based on feature engineering
- CN117496276B: Lung cancer cell morphology analysis and identification method and computer readable storage medium
- CN118967474A: A defect image enhancement method based on traditional data enhancement
- CN116245850B: A fat content analysis device and method for liver pathological slice images
- CN119559564B: Power transmission line icing analysis method and system based on image recognition
- CN119228796B: Method and device for detecting appearance defects of electroplated layer of power pin product and related product
- CN119355003A: An automatic determination system for the surface quality of hot-rolled strip products
- Zhao et al.: Development of an enhanced hybrid attention YOLOv8s small object detection method for phenotypic analysis of root nodules
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination