CN111583199A - Sample image annotation method and device, computer equipment and storage medium - Google Patents
Sample image annotation method and device, computer equipment and storage medium
- Publication number
- CN111583199A (application CN202010335044.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- annotation
- labeling
- segmentation
- result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
- G06T2207/30064—Lung nodule
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Image Analysis (AREA)
Abstract
The application relates to a sample image annotation method and apparatus, a computer device, and a storage medium. The method comprises the following steps: inputting a sample image to be annotated into a segmentation annotation model to obtain an image annotation result; determining, based on a training identifier of the sample image to be annotated, whether the sample image has been used in the training process of the segmentation annotation model; and if so, storing the sample image and the image annotation result in an image annotation library according to the image annotation result and a preset image annotation standard. The image annotation standard characterizes the quality of an image annotation result, and the image data in the image annotation library are used for training an image segmentation model. The method can greatly improve the quality of the annotation data and thereby the performance of the trained image segmentation model.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular to a sample image annotation method and apparatus, a computer device, and a storage medium.
Background
With the rapid development of deep learning, a great deal of research has applied deep learning to the detection, classification, and segmentation of medical images. Image segmentation (typically semantic segmentation) refers to classification at the pixel level: the edge of a lesion region is determined from the semantic information of the image so that the region can be segmented out. Before image segmentation is performed with a deep learning algorithm (or model), a large number of finely annotated samples are generally required as learning targets to train the algorithm. However, segmentation annotation is a relatively complex task: every pixel in the lesion region must be marked, and annotating all of them manually is time-consuming and labor-intensive.
In the conventional approach, a physician first annotates a small number of samples and a model is trained on these annotations; the trained model is then tested on further samples, the physician corrects the model's test results, and the model is trained again on the corrected annotation data.
However, the annotation data used to train the model are still determined manually, so their quality suffers from the variance of manual annotation quality, which in turn degrades the performance of the trained model.
Disclosure of Invention
Based on this, it is necessary to provide a sample image annotation method, apparatus, computer device, and storage medium to address the poor quality of sample annotation data in the conventional approach.
A method of annotating a sample image, the method comprising:
inputting a sample image to be annotated into a segmentation annotation model to obtain an image annotation result;
determining, based on a training identifier of the sample image to be annotated, whether the sample image has been used in the training process of the segmentation annotation model;
if so, storing the sample image to be annotated and the image annotation result in an image annotation library according to the image annotation result and a preset image annotation standard; the image annotation standard characterizes the quality of an image annotation result, and the image data in the image annotation library are used for training an image segmentation model.
In one embodiment, the image annotation standard comprises different annotation quality indexes and a standard quantized value corresponding to each index; storing the sample image to be annotated and the image annotation result in the image annotation library according to the image annotation result and the preset image annotation standard comprises:
quantizing the image annotation result under the different annotation quality indexes to obtain a target quantized value under each index;
and if at least one of the target quantized values under the different annotation quality indexes is greater than its corresponding standard quantized value, storing the sample image to be annotated and the image annotation result in the image annotation library.
In one embodiment, the annotation quality indexes comprise an image annotation similarity index and/or an image connected domain index; quantizing the image annotation result under the different annotation quality indexes to obtain a target quantized value under each index comprises:
calculating the similarity between the image annotation result and an image annotation gold standard;
determining a target quantized value of the image annotation result under the image annotation similarity index based on the similarity and a similarity threshold; and/or,
determining target connected domains based on the image annotation result, and determining the difference between the target connected domains and a connected domain gold standard;
and determining a target quantized value of the image annotation result under the image connected domain index according to the difference and a difference threshold.
In one embodiment, the method further includes:
if the sample image to be annotated has not been used in the training process of the segmentation annotation model, obtaining the image annotation result as modified by a user;
and performing the training process on the segmentation annotation model based on the sample image to be annotated and the modified image annotation result, and changing the training identifier of the sample image to be annotated.
In one embodiment, before the sample image to be annotated is input into the segmentation annotation model to obtain the image annotation result, the method further includes:
determining whether a segmentation annotation model that has undergone a training process currently exists;
if so, inputting the sample image to be annotated into that segmentation annotation model to obtain the image annotation result;
if not, obtaining a reference annotation result for the sample image to be annotated from a user, performing the training process on an initial segmentation annotation model based on the sample image and the reference annotation result to obtain a pre-trained segmentation annotation model, and changing the training identifier of the sample image.
In one embodiment, the method further includes:
and when the number of sample images in the image annotation library reaches a preset threshold, training the image segmentation model with all the sample images in the library and the image annotation result corresponding to each sample image.
In one embodiment, the training identifier of the sample image to be annotated is 0 or 1, where 0 indicates that the sample image has not been used in the training process of the segmentation annotation model and 1 indicates that it has.
A sample image annotation apparatus, the apparatus comprising:
a segmentation annotation module, configured to input a sample image to be annotated into a segmentation annotation model to obtain an image annotation result;
a judging module, configured to determine, based on a training identifier of the sample image to be annotated, whether the sample image has been used in the training process of the segmentation annotation model;
and a storage module, configured to store, when the sample image has been used in the training process of the segmentation annotation model, the sample image to be annotated and the image annotation result in an image annotation library according to the image annotation result and a preset image annotation standard; the image annotation standard characterizes the quality of an image annotation result, and the image data in the image annotation library are used for training an image segmentation model.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
inputting a sample image to be annotated into a segmentation annotation model to obtain an image annotation result;
determining, based on a training identifier of the sample image to be annotated, whether the sample image has been used in the training process of the segmentation annotation model;
if so, storing the sample image to be annotated and the image annotation result in an image annotation library according to the image annotation result and a preset image annotation standard; the image annotation standard characterizes the quality of an image annotation result, and the image data in the image annotation library are used for training an image segmentation model.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
inputting a sample image to be annotated into a segmentation annotation model to obtain an image annotation result;
determining, based on a training identifier of the sample image to be annotated, whether the sample image has been used in the training process of the segmentation annotation model;
if so, storing the sample image to be annotated and the image annotation result in an image annotation library according to the image annotation result and a preset image annotation standard; the image annotation standard characterizes the quality of an image annotation result, and the image data in the image annotation library are used for training an image segmentation model.
According to the sample image annotation method and apparatus, computer device, and storage medium described above, a sample image to be annotated is first input into a segmentation annotation model to obtain an image annotation result. Whether the sample image has been used in the training process of the segmentation annotation model is then determined based on its training identifier; if so, the sample image and the image annotation result are stored in an image annotation library, according to the image annotation result and a preset image annotation standard, for training an image segmentation model. Because all training data ultimately used to train the image segmentation model come from the image annotation library, and every image annotation result in the library was output by the segmentation annotation model and meets the image annotation standard, problems such as annotation differences between annotators and the discontinuous, unsmooth boundaries between adjacent slices typical of manual annotation are avoided. The method of these embodiments can therefore greatly improve the quality of the annotation data and, in turn, the performance of the trained image segmentation model.
Drawings
FIG. 1 is a diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 2 is a flowchart illustrating a method for annotating a sample image according to an embodiment;
FIG. 2a is a schematic representation of a CT image lung nodule mask in one embodiment;
- FIG. 2b is a schematic diagram illustrating the annotation of a COVID-19 pneumonia region in a CT image in one embodiment;
FIG. 2c is a diagram illustrating the labeling results of different doctors for the same lesion area according to an embodiment;
FIG. 2d is a comparison of the human annotation result and the image annotation result output by the segmentation annotation model in one embodiment;
FIG. 3 is a flowchart illustrating a method for annotating a sample image according to another embodiment;
- FIG. 4 is a flowchart illustrating a method for annotating a sample image according to another embodiment;
- FIG. 5 is a flowchart illustrating a method for annotating a sample image according to another embodiment;
- FIG. 6 is a flowchart illustrating a method for annotating a sample image according to another embodiment;
FIG. 7 is a block diagram of a sample image annotation device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The sample image annotation method provided by the embodiment of the application can be applied to computer equipment shown in fig. 1. The computer device comprises a processor and a memory connected by a system bus, wherein a computer program is stored in the memory, and the steps of the method embodiments described below can be executed when the processor executes the computer program. Optionally, the computer device may further comprise a communication interface, a display screen and an input means. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a nonvolatile storage medium storing an operating system and a computer program, and an internal memory. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for connecting and communicating with an external terminal through a network. Optionally, the computer device may be a Personal Computer (PC), a personal digital assistant, other terminal devices such as a tablet computer (PAD), a mobile phone, and the like, and may also be a cloud or a remote server, where a specific form of the computer device is not limited in this embodiment of the application.
In an embodiment, as shown in fig. 2, a sample image annotation method is provided. The method is described as applied to the computer device in fig. 1, and this embodiment concerns the specific process by which the computer device annotates a sample image to be annotated and builds an image annotation library. The method includes the following steps:
S101, inputting a sample image to be annotated into a segmentation annotation model to obtain an image annotation result.
The sample image to be annotated is a sample image required for training the image segmentation model; during training, the annotation data of the sample image also serve as the training gold standard (that is, the learning target) of the image segmentation model. Optionally, the image segmentation model may be a V-Net model, a U-Net model, or another neural network model. Taking lung nodule screening as an example, for a chest CT image a physician must mark the MASK of each nodule (a schematic of the mask is shown in fig. 2a) so that the image segmentation model can be trained on the CT image and the marked mask. For more complicated lesions, such as COVID-19 pneumonia, physicians must draw on extensive pathological knowledge to identify and mark the pneumonia region on every slice of the CT image (a schematic of annotated pneumonia data is shown in fig. 2b), which places even higher demands on their medical expertise. Moreover, the quality of manual annotation often varies between physicians (fig. 2c shows different physicians' annotations of the same lesion region), so the resulting annotation data are inconsistent, which degrades the performance of the trained image segmentation model.
In this embodiment, the annotation result of the sample image to be annotated is produced by a segmentation annotation model. Optionally, the network structure of the segmentation annotation model may be the same as or different from that of the image segmentation model to be trained, which is not limited here; the current segmentation annotation model may already have undergone some degree of training. Specifically, the computer device inputs the sample image into the segmentation annotation model to obtain the corresponding image annotation result, which may be a mask of the lesion region in the sample image.
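As an illustrative sketch only (not the application's implementation), the inference step of S101 amounts to thresholding a segmentation model's per-pixel output into a binary lesion mask. The `seg_model` callable and the threshold value here are hypothetical stand-ins:

```python
import numpy as np

def annotate_sample(image, seg_model, threshold=0.5):
    """Run a segmentation annotation model on a sample image and binarize
    its per-pixel lesion probabilities into a mask (hypothetical API)."""
    prob_map = seg_model(image)                 # per-pixel probabilities
    return (prob_map >= threshold).astype(np.uint8)

# Stand-in "model": simple intensity normalization on a toy 4x4 image.
toy_model = lambda img: img / img.max()
image = np.array([[0, 0, 1, 9],
                  [0, 0, 8, 9],
                  [0, 0, 0, 0],
                  [7, 6, 0, 0]], dtype=float)
mask = annotate_sample(image, toy_model)        # binary lesion mask
```

A real deployment would replace `toy_model` with a trained V-Net or U-Net forward pass.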
S102, determining, based on the training identifier of the sample image to be annotated, whether the sample image has been used in the training process of the segmentation annotation model.
Specifically, each sample image to be annotated may carry a training identifier that indicates whether the sample image has been used in the training process of the segmentation annotation model. Optionally, the training identifier may be 0 or 1, where 0 indicates that the sample image has not been used in training the segmentation annotation model and 1 indicates that it has. The computer device can therefore determine directly from the training identifier whether the sample image has been used in the training process.
S103, if so, storing the sample image to be annotated and the image annotation result in an image annotation library according to the image annotation result and a preset image annotation standard; the image annotation standard characterizes the quality of an image annotation result, and the image data in the image annotation library are used for training an image segmentation model.
Specifically, if the sample image has been used in the training process of the segmentation annotation model, the model has already learned the image features of that sample, so the image annotation result has a certain degree of accuracy. The computer device can then judge, from the image annotation result and the preset image annotation standard, whether the result meets the standard (that is, whether it is qualified). If it does, the sample image and the image annotation result are stored in the image annotation library for training the image segmentation model; if not, the pair is not added to the library and can either be discarded or, after the image annotation result is corrected by an experienced physician, be used again in the training process of the segmentation annotation model. Because the image annotation result is produced by the segmentation annotation model without manual intervention, annotation differences between annotators are avoided, as are the discontinuities and unsmooth boundaries between adjacent slices typical of manual annotation. A comparison of a manual annotation result and the image annotation result output by the segmentation annotation model is shown in fig. 2d.
In addition, the image annotation standard characterizes the quality of an image annotation result and optionally includes, but is not limited to, the smoothness of the annotation boundary, the accuracy of the annotation result, and the like. Optionally, when the number of sample images in the image annotation library reaches a preset threshold (for example, the number required to train the image segmentation model), the computer device may train the image segmentation model with all the sample images in the library and the image annotation result corresponding to each.
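The quality-gated storage of S103 and the library-size trigger can be sketched together as follows; all names, the acceptance predicate, and the threshold value are illustrative assumptions:

```python
def store_if_qualified(sample, result, meets_standard, library, threshold=100):
    """Append the (sample, annotation result) pair to the annotation library
    only when the result meets the image annotation standard; return True
    once the library holds enough samples to train the segmentation model."""
    if meets_standard(result):
        library.append((sample, result))
    return len(library) >= threshold

library = []
# Hypothetical standard: require a quantized quality score above 80.
ready = store_if_qualified("img_001", {"score": 92},
                           lambda r: r["score"] > 80, library, threshold=1)
```

When `ready` becomes true, all pairs in `library` would be handed to the image segmentation model's training loop.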
In the sample image annotation method provided by this embodiment, the computer device first inputs a sample image to be annotated into a segmentation annotation model to obtain an image annotation result. It then determines, from the training identifier of the sample image, whether the sample image has been used in the training process of the segmentation annotation model; if so, it stores the sample image and the image annotation result in an image annotation library, according to the image annotation result and a preset image annotation standard, for training the image segmentation model. All training data ultimately used to train the image segmentation model thus come from the image annotation library, and every image annotation result in the library was output by the segmentation annotation model and meets the image annotation standard, avoiding annotation differences between annotators and the discontinuous, unsmooth boundaries between adjacent slices typical of manual annotation. The method of this embodiment can therefore greatly improve the quality of the annotation data and, in turn, the performance of the trained image segmentation model.
In an embodiment, the image annotation standard comprises different annotation quality indexes and a standard quantized value corresponding to each index. This embodiment concerns the specific process by which the computer device stores the sample image to be annotated and the image annotation result in the image annotation library according to the image annotation result and the preset image annotation standard. Optionally, as shown in fig. 3, S103 may include:
S201, quantizing the image annotation result under the different annotation quality indexes to obtain a target quantized value under each index.
Specifically, suppose the annotation quality indexes include an index A and an index B, with standard quantized values of 80 and 85 respectively; that is, an image annotation result is considered "qualified" under index A when its quantized result is greater than or equal to 80, and "qualified" under index B when its quantized result is greater than or equal to 85. The computer device quantizes the image annotation result output by the segmentation annotation model under each annotation quality index to obtain the target quantized value under that index.
Optionally, the annotation quality indexes may include an image annotation similarity index and/or an image connected domain index. For the similarity index, the computer device may calculate the similarity (the Dice coefficient) between the image annotation result and an image annotation gold standard. Because the sample image has been used in the training process of the segmentation annotation model, an image annotation gold standard from training corresponds to it, and the computer device calculates the similarity between this gold standard and the image annotation result obtained when the same sample image is segmented again. The Dice coefficient is a set-similarity measure, commonly used to compute the similarity of two images (or samples, or other data), with a value range of [0, 1]. It may be defined as Dice = 2|Pt ∩ Gt| / (|Pt| + |Gt|), where Pt denotes the image annotation result and Gt denotes the image annotation gold standard. The computer device then determines the target quantized value of the image annotation result under the similarity index from the obtained similarity and a similarity threshold: provided the similarity exceeds the threshold, the larger it is, the higher the target quantized value, and the closer the image annotation result is to the gold standard. Different similarity thresholds may be set for different segmentation tasks: for larger targets such as a pneumonia region, both lungs, or the liver, the threshold may be set to 0.98; for smaller targets such as lung nodules, it may be set to 0.93.
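The Dice computation described above can be implemented directly on binary masks; this is a generic sketch of the stated formula, not code from the application:

```python
import numpy as np

def dice_coefficient(pred, gold):
    """Dice = 2|Pt ∩ Gt| / (|Pt| + |Gt|) for two binary masks."""
    pred, gold = pred.astype(bool), gold.astype(bool)
    intersection = np.logical_and(pred, gold).sum()
    total = pred.sum() + gold.sum()
    return 2.0 * intersection / total if total else 1.0  # both empty: identical

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])   # image annotation result Pt
gold = np.array([[1, 1, 0],
                 [0, 0, 1]])   # image annotation gold standard Gt
score = dice_coefficient(pred, gold)   # 2*2 / (3+3) = 0.666...
```

The result would then be compared against the task-specific similarity threshold (e.g. 0.98 or 0.93) to derive the target quantized value.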
For the image connected domain index, the computer device may determine the target connected domains from the image annotation result. A connected domain is an image region composed of adjacent foreground pixels with the same pixel value: mutually connected points form one region, and unconnected points form different regions. The computer device then determines the difference between the number of target connected domains and the connected domain gold standard, which can likewise be recorded when the sample image is used in the training process of the segmentation annotation model. For example, if 4 target connected domains are obtained and the gold standard is 5, the difference is 1. The computer device then determines the target quantized value of the image annotation result under the connected domain index from the difference and a difference threshold: provided the difference is below the threshold, the smaller it is, the higher the target quantized value. For example, if the number of target connected domains equals the gold-standard number (a difference of 0), the image annotation result is essentially error-free (the target quantized value may be 100): no over-segmentation has produced new false-positive regions and no under-segmentation has missed target regions. Different difference thresholds may be set for different connected domain segmentation tasks; for pneumonia, where most segmentation targets comprise multiple connected domains, the difference threshold may be set to 3.
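The connected domain count can be illustrated with a simple flood fill over 4-connected foreground pixels (production code would typically use a library routine such as `scipy.ndimage.label` instead); this sketch is illustrative, not the application's implementation:

```python
import numpy as np

def count_connected_domains(mask):
    """Count 4-connected foreground regions in a binary mask by flood fill."""
    mask = mask.astype(bool).copy()
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]              # flood-fill this region
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False    # mark pixel as visited
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return count

mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [1, 0, 0, 1]])
difference = abs(count_connected_domains(mask) - 5)  # vs. a gold standard of 5
```

The `difference` would then be compared against the task-specific difference threshold to derive the target quantized value under the connected domain index.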
S202, if at least one of the target quantized values under the different annotation quality indexes is greater than its corresponding standard quantized value, storing the sample image to be annotated and the image annotation result in the image annotation library.
Specifically, after obtaining the target quantized values under the different annotation quality indexes, the computer device compares each with its corresponding standard quantized value. If at least one target quantized value is greater than its standard quantized value, the image annotation result is deemed "qualified" and the sample image to be annotated and the image annotation result can be stored in the image annotation library. For example, with a single annotation quality index, the result is "qualified" when its target quantized value under that index exceeds the standard quantized value; with two indexes A and B, the result is "qualified" when the target quantized value under index A exceeds its standard value, or the one under index B does, or both do.
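The "at least one index passes" rule of S202 reduces to a single any-comparison; the index values below follow the index A / index B example in the description:

```python
def annotation_qualifies(target_values, standard_values):
    """Accept the annotation result when at least one target quantized value
    exceeds its corresponding standard quantized value."""
    return any(t > s for t, s in zip(target_values, standard_values))

# Standard quantized values 80 (index A) and 85 (index B), as in S201.
annotation_qualifies([82, 70], [80, 85])  # index A passes, so accepted
annotation_qualifies([79, 84], [80, 85])  # neither passes, so rejected
```

A stricter variant of the standard could replace `any` with `all`, requiring every index to pass.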
In the sample image annotation method provided by this embodiment, the computer device quantizes the image annotation result under different annotation quality indexes to obtain target quantization values under those indexes, and if at least one of the target quantization values is greater than its corresponding standard quantization value, the sample image to be annotated and the image annotation result are stored in the image annotation library. That is to say, the image data stored in the image annotation library for training the image segmentation model has passed a standardized audit, which further improves the quality of the annotation data and hence the performance of the trained image segmentation model; at the same time, annotation differences among different annotators are reduced.
The foregoing embodiment describes the case where the sample image to be annotated has undergone the training process of the segmentation annotation model; the following describes the case where it has not. Optionally, as shown in fig. 4, the method further includes:
S301, if the training process of the segmentation annotation model has not been performed on the sample image to be annotated, acquiring the image annotation result obtained after the user modifies the image annotation result.
Specifically, if the sample image to be annotated has not undergone the training process of the segmentation annotation model, that is, the segmentation annotation model has not yet learned the image features of the sample image to be annotated, the error in the output image annotation result may be large. A user (for example, an experienced doctor) can then correct the errors by inspecting the display page of the image annotation result, and the computer device receives the image annotation result as modified by the user.
S302, based on the sample image to be labeled and the modified image labeling result, executing a training process on the segmentation labeling model, and changing the training identification of the sample image to be labeled.
Specifically, based on the sample image to be annotated and the image annotation result modified by the user, the training process is performed on the segmentation annotation model again, and the training identifier of the sample image to be annotated is changed from 0 to 1. It should be noted that the image annotation result modified by the user here can serve as the image annotation gold standard or the connected domain gold standard in the above embodiments. At this point, the sample image to be annotated has undergone the training process of the segmentation annotation model, so when it is acquired again, step S101 can be executed to obtain a new image annotation result, which is compared with the image annotation standard to determine whether it meets that standard and can thus be added to the image annotation library.
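A minimal sketch of steps S301–S302, assuming a hypothetical `train_step` callable and a sample record carrying a `training_id` field; none of these names come from the patent itself.

```python
def retrain_with_correction(sample, corrected_annotation, model, train_step):
    """S302: retrain on the user-corrected result, then flip the training identifier."""
    # the user-corrected annotation serves as the gold standard for this training step
    train_step(model, sample["image"], corrected_annotation)
    sample["training_id"] = 1               # identifier changes from 0 to 1
    sample["gold_standard"] = corrected_annotation
    return sample
```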
It can be seen that every sample image to be annotated, whether or not it has previously undergone the training process of the segmentation annotation model, ultimately undergoes that training process and is compared with the image annotation standard before being stored in the image annotation library. The image data in the image annotation library is output by the segmentation annotation model and meets the image annotation standard, so the quality of the annotation data can be greatly improved, which in turn improves the performance of the trained image segmentation model.
In the above embodiments, the segmentation annotation model is usually a model that has already been trained (or pre-trained) to some extent, but in some scenarios it may be a newly built model that has not undergone any training process. In that scenario, as shown in fig. 5, the method further includes:
S401, judging whether a segmentation annotation model that has undergone a training process currently exists.
S402, if yes, inputting the sample image to be labeled into the segmentation labeling model to obtain an image labeling result.
S403, if not, acquiring a reference annotation result of the sample image to be annotated by the user, and executing a training process on the initial segmentation annotation model based on the sample image to be annotated and the reference annotation result to obtain a pre-trained segmentation annotation model; and changing the training identification of the sample image to be marked.
Specifically, after acquiring the sample image to be annotated, the computer device first judges whether a segmentation annotation model that has previously undergone training exists. If so, it directly executes step S101, that is, inputs the sample image to be annotated into the segmentation annotation model to obtain an image annotation result. If not, a user (such as an experienced doctor) annotates the sample image to be annotated to obtain a reference annotation result; the computer device acquires this reference annotation result, performs the training process on an initial segmentation annotation model (such as a newly built model) based on the sample image to be annotated and the reference annotation result to obtain a pre-trained segmentation annotation model, and changes the training identifier of the sample image to be annotated from 0 to 1. It should be noted that the reference annotation result annotated by the user here can serve as the image annotation gold standard or the connected domain gold standard in the above embodiments. Thereafter, a trained segmentation annotation model exists, and when the sample image to be annotated is acquired again, step S101 can be executed to obtain an image annotation result, which is compared with the image annotation standard to determine whether it meets that standard and can thus be added to the image annotation library.
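Steps S401–S403 can be sketched as the following decision, assuming the absence of a trained model is represented by `None` and `pretrain_model` is a hypothetical callable standing in for the pre-training process.

```python
def ensure_model(model, sample, reference_result, pretrain_model):
    """Return (model, sample); pre-train an initial model when none exists (S403)."""
    if model is not None:
        return model, sample                  # S402: proceed directly to step S101
    # S403: the user's reference annotation serves as the gold standard for pre-training
    model = pretrain_model(sample["image"], reference_result)
    sample["training_id"] = 1                 # mark: training process now executed
    return model, sample
```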
It can be seen that whether or not a segmentation annotation model that has undergone the training process currently exists, such a model can be obtained through a series of training steps; and every sample image to be annotated, whether or not it has previously undergone the training process of the segmentation annotation model, ultimately undergoes that process and is compared with the image annotation standard before being stored in the image annotation library. The image data in the image annotation library is output by the segmentation annotation model and meets the image annotation standard, so the quality of the annotation data can be greatly improved, which in turn improves the performance of the trained image segmentation model.
To better understand the whole process of the above sample image labeling method, the method is described again below, and as shown in fig. 6, the method may include:
S501, acquiring a sample image to be annotated;
S502, judging whether a segmentation annotation model that has undergone a training process currently exists;
S503, if yes, inputting the sample image to be annotated into the segmentation annotation model to obtain an image annotation result, and executing S505;
S504, if not, acquiring the reference annotation result of the sample image to be annotated from the user; performing the training process on the initial segmentation annotation model based on the sample image to be annotated and the reference annotation result to obtain a pre-trained segmentation annotation model; changing the training identifier of the sample image to be annotated; and returning to execute S501;
S505, judging whether the sample image to be annotated has undergone the training process of the segmentation annotation model based on the training identifier of the sample image to be annotated;
S506, if yes, storing the sample image to be annotated and the image annotation result into the image annotation library according to the image annotation result and the preset image annotation standard;
S507, if not, acquiring the image annotation result obtained after the user modifies the image annotation result; performing the training process on the segmentation annotation model based on the sample image to be annotated and the modified image annotation result, and changing the training identifier of the sample image to be annotated; and returning to execute S501.
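The flow above can be sketched as a loop over a queue of samples. The sketch assumes a pre-trained model already exists (so the bootstrap branch is omitted), and all helper callables and record fields are hypothetical stand-ins for the steps described.

```python
def annotation_workflow(samples, model, annotate, retrain, correct, meets_standard):
    """Sketch of the sample-image annotation loop; assumes a pre-trained model."""
    library = []
    queue = list(samples)
    while queue:
        sample = queue.pop(0)                        # acquire a sample to annotate
        result = annotate(model, sample["image"])    # model produces annotation result
        if sample.get("training_id") == 1:           # already used in training?
            if meets_standard(result):               # audit against annotation standard
                library.append((sample, result))     # store in annotation library
        else:                                        # not yet: user corrects, retrain
            corrected = correct(result)
            retrain(model, sample["image"], corrected)
            sample["training_id"] = 1
            queue.append(sample)                     # re-annotate on the next pass
    return library
```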
For the implementation process of each step in this embodiment, reference may be made to the description in the above embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
It should be understood that although the various steps in the flowcharts of figs. 2-6 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the sub-steps or stages are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided a sample image annotation device, including: a segmentation labeling module 11, a judging module 12 and a storage module 13.
Specifically, the segmentation and annotation module 11 is configured to input a sample image to be annotated into a segmentation and annotation model, so as to obtain an image annotation result.
The judging module 12 is configured to judge whether the training process of the segmentation labeling model is executed on the sample image to be labeled based on the training identifier of the sample image to be labeled.
The storage module 13 is configured to, when the sample image to be annotated has undergone the training process of the segmentation annotation model, store the sample image to be annotated and the image annotation result into an image annotation library according to the image annotation result and a preset image annotation standard; the image annotation standard is used for representing the quality of the image annotation result, and the image data in the image annotation library is used for training an image segmentation model.
The sample image annotation device provided in this embodiment can implement the method embodiments described above, and the implementation principle and technical effect are similar, which are not described herein again.
In one embodiment, the image annotation standard comprises different annotation quality indexes and standard quantized values corresponding to the annotation quality indexes; the storage module 13 is specifically configured to quantize the image annotation result under different annotation quality indexes to obtain target quantization values under different annotation quality indexes; and if at least one of the target quantization values under different labeling quality indexes is larger than the corresponding standard quantization value, storing the sample image to be labeled and the image labeling result into an image labeling library.
In one embodiment, the annotation quality index comprises an image annotation similarity index and/or an image connected domain index; the storage module 13 is specifically configured to calculate a similarity between the image annotation result and the image annotation gold standard; determining a target quantization value of the image annotation result under the image annotation similarity index based on the similarity and the similarity threshold; and/or determining a target connected domain based on the image annotation result, and determining the difference between the target connected domain and the connected domain gold standard; and determining a target quantization value of the image annotation result under the index of the image connected domain according to the difference and the difference threshold.
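For the image annotation similarity index referenced above, the following sketch uses the Dice coefficient as the similarity measure — an assumption, since the patent does not fix a particular similarity formula; the quantization mapping and threshold are likewise illustrative.

```python
import numpy as np

def similarity_score(pred_mask, gold_mask, sim_threshold=0.9):
    """Target quantization value in [0, 100] under the similarity index."""
    inter = np.logical_and(pred_mask, gold_mask).sum()
    total = pred_mask.sum() + gold_mask.sum()
    dice = 2.0 * inter / total if total else 1.0   # Dice similarity coefficient
    # full score at or above the threshold, linearly scaled below it (an assumption)
    return 100.0 if dice >= sim_threshold else 100.0 * dice / sim_threshold
```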
In one embodiment, the apparatus further includes a training module, configured to acquire the image annotation result obtained after the user modifies the image annotation result if the sample image to be annotated has not undergone the training process of the segmentation annotation model; and, based on the sample image to be annotated and the modified image annotation result, perform the training process on the segmentation annotation model and change the training identifier of the sample image to be annotated.
In an embodiment, the determining module 12 is further configured to determine whether a segmentation labeling model executing a training process exists currently; if so, indicating the segmentation and annotation module 11 to input the sample image to be annotated into the segmentation and annotation model to obtain an image annotation result; if not, instructing the training module to acquire a reference annotation result of the sample image to be annotated by the user, and performing a training process on the initial segmentation annotation model based on the sample image to be annotated and the reference annotation result to obtain a pre-trained segmentation annotation model; and changing the training identification of the sample image to be marked.
In an embodiment, the training module is further configured to train the image segmentation model by using all sample images in the image annotation library and the image annotation result corresponding to each sample image when the number of sample images in the image annotation library reaches a preset threshold.
In one embodiment, the training identifier of the sample image to be labeled comprises 0 and 1; 0 indicates that the sample image to be labeled has not undergone the training process of the segmentation labeling model, and 1 indicates that it has undergone the training process of the segmentation labeling model.
For specific limitations of the sample image annotation device, reference may be made to the above limitations of the sample image annotation method, which are not described herein again. The modules in the sample image labeling apparatus can be wholly or partially implemented by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 1. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a sample image annotation method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
inputting a sample image to be annotated into a segmentation annotation model to obtain an image annotation result;
judging whether the sample image to be labeled has undergone the training process of the segmentation labeling model based on the training identifier of the sample image to be labeled;
if so, storing the sample image to be labeled and the image labeling result into an image labeling library according to the image labeling result and a preset image labeling standard; the image annotation standard is used for representing the quality of an image annotation result, and image data in an image annotation library is used for training an image segmentation model.
The implementation principle and technical effect of the computer device provided in this embodiment are similar to those of the method embodiments described above, and are not described herein again.
In one embodiment, the image annotation standard comprises different annotation quality indexes and standard quantized values corresponding to the annotation quality indexes; the processor, when executing the computer program, further performs the steps of:
quantizing the image labeling result under different labeling quality indexes to obtain target quantization values under different labeling quality indexes;
and if at least one of the target quantization values under different labeling quality indexes is larger than the corresponding standard quantization value, storing the sample image to be labeled and the image labeling result into an image labeling library.
In one embodiment, the annotation quality index comprises an image annotation similarity index and/or an image connected domain index; the processor, when executing the computer program, further performs the steps of:
calculating the similarity between the image annotation result and the image annotation gold standard;
determining a target quantization value of the image annotation result under the image annotation similarity index based on the similarity and the similarity threshold; and/or,
determining a target connected domain based on the image annotation result, and determining the difference between the target connected domain and the connected domain gold standard;
and determining a target quantization value of the image annotation result under the index of the image connected domain according to the difference and the difference threshold.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
if the sample image to be annotated has not undergone the training process of the segmentation annotation model, acquiring the image annotation result obtained after the user modifies the image annotation result;
and executing a training process on the segmentation and annotation model and changing the training identifier of the sample image to be annotated based on the sample image to be annotated and the modified image annotation result.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
judging whether a segmentation labeling model executing a training process exists at present;
if so, inputting the sample image to be labeled into the segmentation labeling model to obtain an image labeling result;
if not, acquiring a reference annotation result of the sample image to be annotated by the user, and executing a training process on the initial segmentation annotation model based on the sample image to be annotated and the reference annotation result to obtain a pre-trained segmentation annotation model; and changing the training identification of the sample image to be marked.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and when the number of the sample images in the image annotation library reaches a preset threshold value, training the image segmentation model by adopting all the sample images in the image annotation library and the image annotation result corresponding to each sample image.
In one embodiment, the training identifier of the sample image to be labeled comprises 0 and 1; 0 indicates that the sample image to be labeled has not undergone the training process of the segmentation labeling model, and 1 indicates that it has undergone the training process of the segmentation labeling model.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
inputting a sample image to be annotated into a segmentation annotation model to obtain an image annotation result;
judging whether the sample image to be labeled has undergone the training process of the segmentation labeling model based on the training identifier of the sample image to be labeled;
if so, storing the sample image to be labeled and the image labeling result into an image labeling library according to the image labeling result and a preset image labeling standard; the image annotation standard is used for representing the quality of an image annotation result, and image data in an image annotation library is used for training an image segmentation model.
The implementation principle and technical effect of the computer-readable storage medium provided by this embodiment are similar to those of the above-described method embodiment, and are not described herein again.
In one embodiment, the image annotation standard comprises different annotation quality indexes and standard quantized values corresponding to the annotation quality indexes; the computer program when executed by the processor further realizes the steps of:
quantizing the image labeling result under different labeling quality indexes to obtain target quantization values under different labeling quality indexes;
and if at least one of the target quantization values under different labeling quality indexes is larger than the corresponding standard quantization value, storing the sample image to be labeled and the image labeling result into an image labeling library.
In one embodiment, the annotation quality index comprises an image annotation similarity index and/or an image connected domain index; the computer program when executed by the processor further realizes the steps of:
calculating the similarity between the image annotation result and the image annotation gold standard;
determining a target quantization value of the image annotation result under the image annotation similarity index based on the similarity and the similarity threshold; and/or,
determining a target connected domain based on the image annotation result, and determining the difference between the target connected domain and the connected domain gold standard;
and determining a target quantization value of the image annotation result under the index of the image connected domain according to the difference and the difference threshold.
In one embodiment, the computer program when executed by the processor further performs the steps of:
if the sample image to be annotated has not undergone the training process of the segmentation annotation model, acquiring the image annotation result obtained after the user modifies the image annotation result;
and executing a training process on the segmentation and annotation model and changing the training identifier of the sample image to be annotated based on the sample image to be annotated and the modified image annotation result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
judging whether a segmentation labeling model executing a training process exists at present;
if so, inputting the sample image to be labeled into the segmentation labeling model to obtain an image labeling result;
if not, acquiring a reference annotation result of the sample image to be annotated by the user, and executing a training process on the initial segmentation annotation model based on the sample image to be annotated and the reference annotation result to obtain a pre-trained segmentation annotation model; and changing the training identification of the sample image to be marked.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and when the number of the sample images in the image annotation library reaches a preset threshold value, training the image segmentation model by adopting all the sample images in the image annotation library and the image annotation result corresponding to each sample image.
In one embodiment, the training identifier of the sample image to be labeled comprises 0 and 1; 0 indicates that the sample image to be labeled has not undergone the training process of the segmentation labeling model, and 1 indicates that it has undergone the training process of the segmentation labeling model.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention patent. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A sample image annotation method is characterized by comprising the following steps:
inputting a sample image to be annotated into a segmentation annotation model to obtain an image annotation result;
judging whether the sample image to be labeled executes the training process of the segmentation labeling model or not based on the training identification of the sample image to be labeled;
if so, storing the sample image to be labeled and the image labeling result into an image labeling library according to the image labeling result and a preset image labeling standard; the image annotation standard is used for representing the quality of an image annotation result, and the image data in the image annotation library is used for training the image segmentation model.
2. The method according to claim 1, wherein the image annotation standard comprises different annotation quality indicators and standard quantization values corresponding to the annotation quality indicators; the step of storing the sample image to be labeled and the image labeling result into an image labeling library according to the image labeling result and a preset image labeling standard comprises the following steps:
quantizing the image labeling result under different labeling quality indexes to obtain target quantized values under different labeling quality indexes;
and if at least one of the target quantized values under different labeling quality indexes is larger than the corresponding standard quantized value, storing the sample image to be labeled and the image labeling result into the image labeling library.
3. The method of claim 2, wherein the annotation quality indicator comprises an image annotation similarity indicator and/or an image connected domain indicator; the quantifying the image labeling result under the different labeling quality indexes to obtain target quantification values under the different labeling quality indexes comprises the following steps:
calculating the similarity between the image annotation result and the image annotation gold standard;
determining a target quantization value of the image labeling result under the image labeling similarity index based on the similarity and a similarity threshold; and/or,
determining a target connected domain based on the image annotation result, and determining the difference between the target connected domain and a connected domain gold standard;
and determining a target quantization value of the image labeling result under the index of the image connected domain according to the difference and the difference threshold.
4. The method according to any one of claims 1-3, further comprising:
if the training process of the segmentation annotation model is not executed on the sample image to be annotated, acquiring an image annotation result obtained after the image annotation result is modified by a user;
and executing a training process on the segmentation labeling model and changing a training identifier of the sample image to be labeled based on the sample image to be labeled and the modified image labeling result.
5. The method according to any one of claims 1 to 3, wherein before the step of inputting the sample image to be labeled into the segmentation labeling model to obtain the image labeling result, the method further comprises:
judging whether a segmentation labeling model executing a training process exists at present;
if so, inputting the sample image to be labeled into the segmentation labeling model to obtain an image labeling result;
if not, acquiring a reference marking result of the user on the sample image to be marked, and executing a training process on the initial segmentation marking model based on the sample image to be marked and the reference marking result to obtain a pre-trained segmentation marking model; and changing the training identifier of the sample image to be marked.
6. The method of claim 1, further comprising:
and when the number of the sample images in the image annotation library reaches a preset threshold value, training the image segmentation model by adopting all the sample images in the image annotation library and the image annotation result corresponding to each sample image.
7. The method according to claim 1, wherein the training identifier of the sample image to be labeled is 0 or 1; 0 represents that the sample image to be labeled has not been subjected to the training process of the segmentation labeling model,
and 1 represents that the sample image to be labeled has been subjected to the training process of the segmentation labeling model.
8. A sample image annotation apparatus, characterized in that the apparatus comprises:
the segmentation and annotation module is used for inputting the sample image to be annotated into the segmentation and annotation model to obtain an image annotation result;
the judging module is used for judging whether the training process of the segmentation labeling model is executed on the sample image to be labeled based on the training identification of the sample image to be labeled;
the storage module is used for storing the sample image to be labeled and the image labeling result in an image annotation library according to the image labeling result and a preset image annotation standard when the training process of the segmentation labeling model has been executed on the sample image to be labeled; the image annotation standard is used for representing the quality of an image annotation result, and the image data in the image annotation library is used for training an image segmentation model.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010335044.5A CN111583199B (en) | 2020-04-24 | 2020-04-24 | Sample image labeling method, device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010335044.5A CN111583199B (en) | 2020-04-24 | 2020-04-24 | Sample image labeling method, device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111583199A true CN111583199A (en) | 2020-08-25 |
CN111583199B CN111583199B (en) | 2023-05-26 |
Family
ID=72112615
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010335044.5A Active CN111583199B (en) | 2020-04-24 | 2020-04-24 | Sample image labeling method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111583199B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102436583A (en) * | 2011-09-26 | 2012-05-02 | 哈尔滨工程大学 | Image segmentation method based on annotated image learning |
CN109741346A (en) * | 2018-12-30 | 2019-05-10 | 上海联影智能医疗科技有限公司 | Area-of-interest exacting method, device, equipment and storage medium |
CN109902672A (en) * | 2019-01-17 | 2019-06-18 | 平安科技(深圳)有限公司 | Image labeling method and device, storage medium, computer equipment |
WO2019137196A1 (en) * | 2018-01-11 | 2019-07-18 | 阿里巴巴集团控股有限公司 | Image annotation information processing method and device, server and system |
CN110135425A (en) * | 2018-02-09 | 2019-08-16 | 北京世纪好未来教育科技有限公司 | Sample labeling method and computer storage medium |
CN110378438A (en) * | 2019-08-07 | 2019-10-25 | 清华大学 | Training method, device and the relevant device of Image Segmentation Model under label is fault-tolerant |
CN110610193A (en) * | 2019-08-12 | 2019-12-24 | 大箴(杭州)科技有限公司 | Method and device for processing labeled data |
Non-Patent Citations (2)
Title |
---|
XIAO KE et al.: "end-to-end automatic image annotation based on deep CNN and multi-label data augmentation" *
王龙: "Automatic image annotation based on visual attention mechanism and support vector machine" *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348826A (en) * | 2020-10-26 | 2021-02-09 | 陕西科技大学 | Interactive liver segmentation method based on geodesic distance and V-net |
CN112348826B (en) * | 2020-10-26 | 2023-04-07 | 陕西科技大学 | Interactive liver segmentation method based on geodesic distance and V-net |
CN112767307A (en) * | 2020-12-28 | 2021-05-07 | 上海联影智能医疗科技有限公司 | Image processing method, image processing device, computer equipment and storage medium |
CN112786189A (en) * | 2021-01-05 | 2021-05-11 | 重庆邮电大学 | Intelligent diagnosis system for new coronary pneumonia based on deep learning |
CN112786189B (en) * | 2021-01-05 | 2022-07-01 | 重庆邮电大学 | Intelligent diagnosis system for new coronary pneumonia based on deep learning |
CN113744288A (en) * | 2021-11-04 | 2021-12-03 | 北京欧应信息技术有限公司 | Method, apparatus, and medium for generating annotated sample images |
CN113744288B (en) * | 2021-11-04 | 2022-01-25 | 北京欧应信息技术有限公司 | Method, apparatus, and medium for generating annotated sample images |
CN114119645A (en) * | 2021-11-25 | 2022-03-01 | 推想医疗科技股份有限公司 | Method, system, device and medium for determining image segmentation quality |
CN114119645B (en) * | 2021-11-25 | 2022-10-21 | 推想医疗科技股份有限公司 | Method, system, device and medium for determining image segmentation quality |
Also Published As
Publication number | Publication date |
---|---|
CN111583199B (en) | 2023-05-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111583199B (en) | Sample image labeling method, device, computer equipment and storage medium | |
JP7279015B2 (en) | Evaluation of density in mammography | |
CN109872306B (en) | Medical image segmentation method, device and storage medium | |
CN111445449B (en) | Method, device, computer equipment and storage medium for classifying region of interest | |
CN110310256B (en) | Coronary stenosis detection method, coronary stenosis detection device, computer equipment and storage medium | |
CN111640093B (en) | Medical image quality control method and computer readable storage medium | |
CN113724185B (en) | Model processing method, device and storage medium for image classification | |
CN111709485B (en) | Medical image processing method, device and computer equipment | |
CN111369542A (en) | Blood vessel marking method, image processing system and storage medium | |
CN112614144A (en) | Image segmentation method, device, equipment and storage medium | |
CN112151179B (en) | Image data evaluation method, device, equipment and storage medium | |
CN111325714A (en) | Region-of-interest processing method, computer device and readable storage medium | |
CN110046707B (en) | Evaluation optimization method and system of neural network model | |
CN113192031B (en) | Vascular analysis method, vascular analysis device, vascular analysis computer device, and vascular analysis storage medium | |
CN110298820A (en) | Image analysis methods, computer equipment and storage medium | |
CN111209946B (en) | Three-dimensional image processing method, image processing model training method and medium | |
CN111223158B (en) | Artifact correction method for heart coronary image and readable storage medium | |
CN111724371A (en) | Data processing method and device and electronic equipment | |
CN112102235B (en) | Human body part recognition method, computer device, and storage medium | |
CN111489318A (en) | Medical image enhancement method and computer-readable storage medium | |
CN115861255A (en) | Model training method, device, equipment, medium and product for image processing | |
Pérez-García et al. | RadEdit: stress-testing biomedical vision models via diffusion image editing | |
CN113160199B (en) | Image recognition method and device, computer equipment and storage medium | |
CN111951316B (en) | Image quantization method and storage medium | |
CN112530554B (en) | Scanning positioning method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||