
CN112801940B - Model evaluation method, device, equipment and medium


Info

Publication number
CN112801940B
CN112801940B
Authority
CN
China
Prior art keywords
model
prediction result
labeling
image data
result
Prior art date
Legal status
Active
Application number
CN202011626216.0A
Other languages
Chinese (zh)
Other versions
CN112801940A (en)
Inventor
刘应龙
Current Assignee
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Original Assignee
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Priority date
Filing date
Publication date
Application filed by Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Priority to CN202011626216.0A
Publication of CN112801940A
Priority to US17/559,473 (published as US20220207742A1)
Application granted
Publication of CN112801940B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention disclose a model evaluation method, device, equipment and medium. The method comprises the following steps: acquiring first image data and inputting the first image data into a trained segmentation model to obtain a first prediction result, the trained segmentation model being obtained by training on a sample set of image data and labeling data; and inputting the first prediction result into a regression model to obtain an evaluation result of the trained segmentation model, the regression model being used to calculate the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data. The technical scheme avoids the subjectivity that affects evaluation of a segmentation model by medical professionals, reduces labor cost, and improves the accuracy of model evaluation.

Description

Model evaluation method, device, equipment and medium
Technical Field
Embodiments of the invention relate to image processing technology, and in particular to a model evaluation method, device, equipment and medium.
Background
Artificial intelligence has developed rapidly in the medical field over the last decade, benefiting largely from advances in machine learning, which has become a new engine of innovation in medicine. Unlike traditional machine learning techniques, whose capability is largely limited by their shallow structure, deep learning mimics the deep, layered organization of the human brain and can process and represent information at multiple levels. Image segmentation networks based on deep learning are therefore widely applied in medical imaging.
Since deep learning is a supervised machine learning approach, a deep learning model must be trained, and its performance evaluated, on a large amount of labeled data (commonly called the "gold standard" in medical imaging) in order to improve its accuracy and generalization. As a result, researchers and engineers demand ever more labeled data of ever higher quality, and a large amount of manpower is needed to label the data.
When evaluating the accuracy and other aspects of the performance of a trained machine learning model, medical experts are generally organized to judge the model's output. However, this evaluation scheme incurs a high labor cost, and the judgment of medical professionals on disease findings is subjective, being influenced by their experience and cognition. The evaluation schemes for machine learning models in the prior art therefore need to be improved.
Disclosure of Invention
The embodiment of the invention provides a model evaluation method, device, equipment and medium, which are used for optimizing the evaluation of a machine learning model, reducing the labor cost and improving the evaluation accuracy.
In a first aspect, an embodiment of the present invention provides a method for evaluating a model, where the method includes:
Acquiring first image data, and inputting the first image data into the trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training a sample set of image data and labeling data;
Inputting the first prediction result into a regression model to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data.
In a second aspect, an embodiment of the present invention further provides a model evaluation device, where the device includes:
The first prediction result acquisition module is used for acquiring first image data, and inputting the first image data into the trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training a sample set of image data and labeling data;
The evaluation result acquisition module is used for inputting the first prediction result into a regression model to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data.
In a third aspect, an embodiment of the present invention further provides a model evaluation device, where the model evaluation device includes:
one or more processors;
A storage means for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the model evaluation method as provided by any embodiment of the present invention.
In a fourth aspect, embodiments of the present invention further provide a computer readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements a model evaluation method as provided by any embodiment of the present invention.
According to the technical scheme of the embodiments, first image data are acquired and input into the trained segmentation model to obtain a first prediction result, the trained segmentation model having been obtained by training on a sample set of image data and labeling data; a segmentation prediction result of the test data is thus obtained and can be evaluated to assess the performance of the segmentation model. The first prediction result is input into a regression model to obtain an evaluation result of the trained segmentation model, the regression model being used to calculate the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data. Evaluating the segmentation model through the regression model solves the problem that evaluation of the segmentation model by medical experts is easily affected by subjectivity, reduces labor cost, and improves the accuracy of model evaluation.
Drawings
FIG. 1 is a flow chart of a method for model evaluation in accordance with a first embodiment of the present invention;
FIG. 2 is a flow chart of a model evaluation method in a second embodiment of the present invention;
FIG. 3 is a block diagram of a model evaluation apparatus in a third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a model evaluation device in a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of the model evaluation method provided in an embodiment of the present invention. The embodiment is applicable to evaluating a trained model; the method may be performed by a model evaluation device and specifically includes the following steps:
s110, acquiring first image data, and inputting the first image data into a trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training a sample set of image data and labeling data.
After training of the segmentation model on the training data is completed, the performance of the segmentation model needs to be tested on test data that was not used in training. If the test shows that the required performance is met, the model can be used to segment images; if the test is not passed, the network parameters of the segmentation model are adjusted according to the test data and the output obtained on it, and training of the segmentation model is continued until the target performance is achieved.
Optionally, the segmentation model is used to label the region of interest in the image data and obtain contour data of the region of interest as the prediction result; the image data is a human tissue image, and the region of interest is a lesion region. The segmentation model can segment the region of interest and output the segmented region as region-of-interest image information. It can also label the lesion type of the lesion region, for example marking the lesion region as malignant or benign, or as a liver lesion region or a heart lesion region.
Generally, when a segmentation model is tested with test data, the region of interest in the test data has to be labeled, and the segmentation model is tested with the test data and the corresponding labeling information: the test data is input into the segmentation model to obtain a predicted labeling result, and the performance of the segmentation model is assessed by comparing the predicted result with the actual labeling information. However, labeling the test data consumes a certain amount of manpower and reduces the training efficiency of the segmentation model. The present embodiment therefore tests the performance of the segmentation model by means of a regression model.
The segmentation model is trained on the sample set of image data and corresponding labeling data: a loss function is calculated between the training output of the segmentation model and the corresponding labeling data, the loss is propagated back into the segmentation model through a back-propagation algorithm, and the network parameters of the segmentation model are adjusted based on a gradient descent method. This training step is executed iteratively until a preset number of iterations is completed or the segmentation precision of the model reaches a preset precision, at which point training of the segmentation model is determined to be complete. Optionally, the first image data may be sample data for testing the trained segmentation model. The first image data is input into the trained segmentation model to obtain a first prediction result, which is the segmentation result of the segmentation model on the first image data.
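For illustration only, the following is a minimal sketch of this training loop in Python, assuming a PyTorch-style binary segmentation setup; the binary cross-entropy loss, the plain SGD optimizer, the Dice-based precision check and all helper names are assumptions of the example and are not prescribed by this embodiment.

import torch
import torch.nn as nn

def dice_score(pred_logits, label, eps=1e-6):
    # Dice coefficient between the thresholded prediction and the labeling data.
    pred = (torch.sigmoid(pred_logits) > 0.5).float()
    inter = (pred * label).sum()
    return (2 * inter + eps) / (pred.sum() + label.sum() + eps)

def train_segmentation_model(model, data_loader, num_epochs=50, lr=1e-4, target_dice=0.95):
    # Iterate: compute the loss between the model output and the labeling data,
    # back-propagate it, and adjust the network parameters by gradient descent,
    # until a preset number of epochs or a preset segmentation precision is reached.
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for epoch in range(num_epochs):
        model.train()
        for image, label in data_loader:           # sample set of image data and labeling data
            optimizer.zero_grad()
            loss = criterion(model(image), label)  # loss between training output and labels
            loss.backward()                        # back-propagation into the segmentation model
            optimizer.step()                       # gradient-descent parameter update
        model.eval()
        with torch.no_grad():
            scores = [dice_score(model(img), lbl) for img, lbl in data_loader]
        if torch.stack(scores).mean() >= target_dice:  # preset precision reached
            break
    return model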
S120, inputting a first prediction result into a regression model to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data.
The first prediction result is input into the regression model, and the output of the regression model is the evaluation result of the segmentation model; this evaluation result reflects the similarity between the first prediction result and the corresponding real labeling data.
Optionally, the model evaluation method further includes a training process for the regression model, which specifically includes: acquiring second image data and labeling the region of interest of the second image data to obtain a labeling result (optionally, the second image data may be the sample data used to train the segmentation model); inputting the second image data into the trained segmentation model to obtain a second prediction result; and training the regression model with the labeling result and the second prediction result as samples. That is, the second image data used to train the segmentation model is input into the trained segmentation model, which outputs a second prediction result, namely the segmentation result of the second image data; the labeling result corresponding to the second image data is acquired, and the regression model is trained based on the labeling result and the second prediction result.
Optionally, training the regression model with the labeling result and the second prediction result as samples includes: acquiring measurement indices based on the labeling result and the second prediction result, the measurement indices reflecting the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data; and training the regression model with the measurement indices and the second prediction result as samples. Optionally, the measurement indices may be computed independently from the second prediction result and the corresponding labeling result, or may be obtained by means of the segmentation model. The distribution rule may be the shape and position of the region of interest, or the position and result of the lesion labeling. The regression model is trained using, as samples, the second prediction result together with the measurement indices obtained from the second prediction result and its corresponding labeling result. The second prediction result is input into the regression model to be trained to obtain predicted measurement indices, and a loss function is calculated between the predicted and real measurement indices. When the measurement indices comprise three indices (index 1, index 2 and index 3), the loss functions of the three indices are calculated separately and summed to obtain a target loss function; optionally, weights may be set for the loss functions of the three measurement indices as required, each loss function being multiplied by its weight before the weighted losses are summed into the target loss function. The target loss function is propagated back into the regression model, and the network parameters of the regression model are adjusted based on a gradient descent method. Optionally, the loss function may be a Huber loss function. This training step is executed iteratively until a preset number of iterations is completed or the accuracy of the measurement indices output by the regression model reaches a preset accuracy, at which point training of the regression model is determined to be complete.
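The weighted-loss training step described above can be sketched as follows, assuming a PyTorch-style regression model that maps feature vectors of the second prediction results to three measurement indices; the tensor shapes, the SGD optimizer and the function names are assumptions of the example.

import torch
import torch.nn as nn

def train_regression_model(reg_model, features, true_indices, weights=(1.0, 1.0, 1.0),
                           num_steps=1000, lr=1e-3):
    # features     : (N, F) tensor, features of the second prediction results
    # true_indices : (N, 3) tensor, the real measurement indices (index 1, 2, 3) per sample
    huber = nn.HuberLoss()                                # optional Huber loss per index
    optimizer = torch.optim.SGD(reg_model.parameters(), lr=lr)
    w = torch.tensor(weights)
    for step in range(num_steps):                         # preset number of iterations
        optimizer.zero_grad()
        pred_indices = reg_model(features)                # predicted measurement indices, (N, 3)
        per_index = torch.stack([huber(pred_indices[:, i], true_indices[:, i])
                                 for i in range(true_indices.shape[1])])
        target_loss = (w * per_index).sum()               # weighted sum -> target loss function
        target_loss.backward()                            # propagate the target loss back
        optimizer.step()                                  # gradient-descent parameter update
    return reg_model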
Optionally, training the regression model with the measurement indices and the second prediction result as samples includes: extracting features from the second prediction result to obtain second feature information; and training the regression model to be trained based on the second feature information and the measurement indices. Optionally, feature extraction is performed on the second prediction result to obtain feature information of the second prediction result. For example, when the second prediction result is image information of the region of interest, the second feature information includes: the area of the region of interest, the first distance of the image data containing the current region of interest, and the position of the current region of interest in the corresponding image data, where the first distance is the distance between the current image data and the first frame of image data in the same batch of image data. Image data of the same batch come from the scanning results of the same medical imaging device and share the same scanning parameters and scanning positions.
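A minimal sketch of this feature extraction for a single prediction result is given below; it assumes the prediction is a binary region-of-interest mask and uses the mask centroid as the position feature, which is one possible reading of the position information described above.

import numpy as np

def extract_prediction_features(roi_mask, slice_index, first_slice_index=0):
    # roi_mask          : 2-D array, 1 inside the predicted region of interest, 0 outside
    # slice_index       : index of the current image within its batch of image data
    # first_slice_index : index of the first frame of the same batch
    area = float(roi_mask.sum())                        # area of the region of interest
    distance = float(slice_index - first_slice_index)   # distance to the first frame of the batch
    ys, xs = np.nonzero(roi_mask)
    if len(xs) == 0:                                    # empty prediction: no region found
        cy, cx = -1.0, -1.0
    else:
        cy, cx = float(ys.mean()), float(xs.mean())     # position of the region in the image
    return np.array([area, distance, cy, cx], dtype=np.float32)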
Optionally, the measurement indices further include: accuracy, sensitivity and specificity calculated from the labeling result and the second prediction result. That is, in addition to the similarity between the distribution rule of the labeling result and the distribution rule of the second prediction result, the measurement indices include the accuracy, sensitivity and specificity calculated from the labeling result and the second prediction result. The accuracy is obtained by comparing the second prediction result with the labeling result pixel by pixel and dividing the number of pixels on which they agree by the total number of pixels in the second prediction result. The sensitivity reflects the model's ability to recognize the region of interest in the input data; the higher the sensitivity, the lower the probability that the corresponding segmentation model misses labels. For example, taking one prediction from the second prediction results and comparing it pixel by pixel with the corresponding labeling result: pixels of the labeled region of interest that are correctly predicted count toward the sensitivity, which is the proportion of labeled region-of-interest pixels that the prediction covers, while the specificity is the proportion of background pixels that the prediction correctly excludes. The higher the specificity, the higher the accuracy of the output result of the segmentation model. The second feature information and the measurement indices are used as samples to train the regression model to be trained. The correspondence between feature information and measurement indices is thereby learned by the regression model, so that the regression model can obtain the measurement indices of the first prediction result according to this correspondence and thereby evaluate the segmentation model.
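The pixel-wise accuracy, sensitivity and specificity can be sketched as follows, using the standard confusion-matrix definitions; the exact counting convention is an assumption of the example, since this embodiment describes these quantities only informally.

import numpy as np

def pixelwise_metrics(pred_mask, label_mask):
    # Both inputs are binary masks: the second prediction result and its labeling result.
    pred = pred_mask.astype(bool)
    label = label_mask.astype(bool)
    tp = np.logical_and(pred, label).sum()      # labeled region pixels the model found
    tn = np.logical_and(~pred, ~label).sum()    # background pixels correctly excluded
    fp = np.logical_and(pred, ~label).sum()     # background pixels wrongly labeled as region
    fn = np.logical_and(~pred, label).sum()     # labeled region pixels the model missed
    eps = 1e-9
    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    sensitivity = tp / (tp + fn + eps)          # higher sensitivity -> fewer missed labels
    specificity = tn / (tn + fp + eps)          # higher specificity -> more accurate output
    return accuracy, sensitivity, specificity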
According to the technical scheme of this embodiment, first image data are acquired and input into the trained segmentation model to obtain a first prediction result, the trained segmentation model having been obtained by training on a sample set of image data and labeling data; a segmentation prediction result of the test data is thus obtained and can be evaluated to assess the performance of the segmentation model. The first prediction result is input into a regression model to obtain an evaluation result of the trained segmentation model, the regression model being used to calculate the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data. Evaluating the segmentation model through the regression model solves the problem that evaluation of the segmentation model by medical experts is easily affected by subjectivity, reduces labor cost, and improves the accuracy of model evaluation.
Example two
Fig. 2 is a flowchart of the model evaluation method provided in an embodiment of the present invention. This embodiment further refines the previous embodiment: inputting the first prediction result into the regression model includes extracting features from the first prediction result to obtain first feature information, and inputting the first feature information into the regression model. The features of the first prediction result are extracted, the extracted feature information is input into the regression model, and the regression model outputs the measurement indices corresponding to the feature information of the first prediction result according to the correspondence between feature information and measurement indices, so that the segmentation model is evaluated according to the measurement indices of the first prediction result, improving the efficiency and accuracy of model evaluation.
As shown in fig. 2, the method specifically comprises the following steps:
S210, acquiring first image data, and inputting the first image data into the trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training a sample set of image data and labeling data.
S220, extracting features of the first prediction result to obtain first feature information; inputting the first characteristic information into a regression model to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data.
Optionally, the feature extraction method for the first prediction result is the same as that for the second prediction result. When the first prediction result is image information of the region of interest, the first feature information includes: the area of the region of interest, the second distance of the image data containing the current region of interest, and the position of the current region of interest in the corresponding image data, where the second distance is the distance between the current image data and the first frame of image data in the same batch of image data. Image data of the same batch come from the scanning results of the same medical imaging device and share the same scanning parameters and scanning positions. When the first prediction result is a lesion-type labeling, the first feature information includes the position of the lesion in the image, the area of the lesion, and the distance between the current lesion image and the first frame of image data in the same batch. Feeding the extracted feature information into the regression model makes it easier for the regression model to output the measurement indices corresponding to the feature information of the first prediction result according to the learned correspondence between feature information and measurement indices, so that the segmentation model can be evaluated from the measurement indices of the first prediction result, improving the efficiency and accuracy of model evaluation.
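Putting the pieces together, a minimal sketch of the evaluation flow of this embodiment is given below; it reuses the extract_prediction_features sketch from the first embodiment and treats the segmentation and regression models as PyTorch modules, and the tensor shapes and the averaging of per-image measurement indices are assumptions of the example.

import torch

def evaluate_segmentation_model(seg_model, reg_model, test_images, slice_indices):
    # test_images   : iterable of CHW image tensors (the first image data, unlabeled)
    # slice_indices : index of each image within its batch of image data
    seg_model.eval()
    reg_model.eval()
    all_indices = []
    with torch.no_grad():
        for image, slice_idx in zip(test_images, slice_indices):
            logits = seg_model(image.unsqueeze(0))                       # first prediction result
            roi_mask = (torch.sigmoid(logits) > 0.5).squeeze().cpu().numpy()
            feats = extract_prediction_features(roi_mask, slice_idx)     # first feature information
            indices = reg_model(torch.from_numpy(feats).unsqueeze(0))    # predicted measurement indices
            all_indices.append(indices.squeeze(0))
    # average the per-image measurement indices as the evaluation result of the segmentation model
    return torch.stack(all_indices).mean(dim=0)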
According to the technical scheme of this embodiment, first image data are acquired and input into the trained segmentation model to obtain a first prediction result, the trained segmentation model having been obtained by training on a sample set of image data and labeling data; a segmentation prediction result of the test data is thus obtained and can be evaluated to assess the performance of the segmentation model. Features are extracted from the first prediction result to obtain first feature information, and the first feature information is input into the regression model to obtain the evaluation result of the trained segmentation model; the regression model is used to calculate the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data. Feeding the extracted feature information into the regression model makes it easier for the regression model to output the measurement indices corresponding to the feature information of the first prediction result according to the correspondence between feature information and measurement indices. Evaluating the segmentation model through the regression model solves the problem that evaluation of the segmentation model by medical experts is easily affected by subjectivity, reduces labor cost, and improves the accuracy of model evaluation.
Example III
Fig. 3 is a block diagram of a model evaluation device according to a third embodiment of the present invention, where the model evaluation device includes: a first predicted result acquisition module 310 and an evaluation result acquisition module 320.
The first prediction result obtaining module 310 is configured to obtain first image data, and input the first image data into the trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training a sample set of image data and labeling data; an evaluation result obtaining module 320, configured to input the first prediction result into a regression model, to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data.
In the technical solution of the foregoing embodiment, the model evaluation device further includes:
The model training module is used for acquiring second image data, and labeling the region of interest of the second image data to obtain a labeling result; inputting second image data into the trained segmentation model to obtain a second prediction result; and training the regression model by taking the labeling result and the second prediction result as samples.
In the technical solution of the foregoing embodiment, the model training module includes:
the measurement index calculation unit is used for obtaining measurement indexes based on the labeling result and the second prediction result, and the measurement indexes reflect the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data;
and the regression model training unit is used for training the regression model by taking the measurement index and the second prediction result as samples.
Optionally, the measurement indices further include: accuracy, sensitivity and specificity calculated from the labeling result and the second prediction result.
In the technical solution of the foregoing embodiment, the regression model training unit includes:
The feature extraction subunit is used for carrying out feature extraction on the second prediction result to obtain second feature information;
and the regression model training subunit is used for training the regression model to be trained based on the second characteristic information and the measurement index.
In the technical solution of the foregoing embodiment, the evaluation result obtaining module 320 includes:
The first characteristic information acquisition unit is used for carrying out characteristic extraction on the first prediction result to obtain first characteristic information;
And the characteristic information input unit is used for inputting the first characteristic information into the regression model.
Optionally, the segmentation model is used to label the region of interest in the image data and obtain contour data of the region of interest as the prediction result; the image data is a human tissue image, and the region of interest is a lesion region.
According to the technical scheme of this embodiment, first image data are acquired and input into the trained segmentation model to obtain a first prediction result, the trained segmentation model having been obtained by training on a sample set of image data and labeling data; a segmentation prediction result of the test data is thus obtained and can be evaluated to assess the performance of the segmentation model. The first prediction result is input into a regression model to obtain an evaluation result of the trained segmentation model, the regression model being used to calculate the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data. Evaluating the segmentation model through the regression model solves the problem that evaluation of the segmentation model by medical experts is easily affected by subjectivity, reduces labor cost, and improves the accuracy of model evaluation.
The model evaluation device provided by the embodiment of the invention can execute the model evaluation method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 4 is a schematic structural diagram of a model evaluation device according to a fourth embodiment of the present invention. As shown in Fig. 4, the model evaluation device includes a processor 410, a memory 420, an input device 430 and an output device 440; the number of processors 410 in the model evaluation device may be one or more, and one processor 410 is taken as an example in Fig. 4; the processor 410, memory 420, input device 430 and output device 440 in the model evaluation device may be connected by a bus or in other ways, connection by a bus being taken as an example in Fig. 4.
The memory 420 is used as a computer readable storage medium, and may be used to store a software program, a computer executable program, and a module, such as program instructions/modules corresponding to the model evaluation method in the embodiment of the present invention (for example, the first prediction result obtaining module 310 and the evaluation result obtaining module 320 in the model evaluation device). The processor 410 executes various functional applications and data processing of the model evaluation device by running software programs, instructions and modules stored in the memory 420, i.e., implements the model evaluation method described above.
Memory 420 may include primarily a program storage area and a data storage area, wherein the program storage area may store an operating system, at least one application program required for functionality; the storage data area may store data created according to the use of the terminal, etc. In addition, memory 420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 420 may further include memory remotely located with respect to processor 410, which may be connected to the model evaluation device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the model evaluation device. The output device 440 may include a display device such as a display screen.
Example five
A fifth embodiment of the present invention also provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are for performing a model evaluation method, the method comprising:
Acquiring first image data, and inputting the first image data into the trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training a sample set of image data and labeling data;
Inputting the first prediction result into a regression model to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the method operations described above, and may also perform the related operations in the model evaluation method provided in any embodiment of the present invention.
From the above description of the embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software together with necessary general-purpose hardware, or of course by hardware alone, although in many cases the former is the preferred embodiment. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the model evaluation device, each unit and module included are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (8)

1. A model evaluation method, comprising:
acquiring first image data, and inputting the first image data into a trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training a sample set of image data and labeling data;
inputting the first prediction result into a regression model to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data;
The method further comprises a training process of the regression model, and the training process specifically comprises the following steps:
acquiring second image data, and labeling the region of interest of the second image data to obtain a labeling result;
inputting second image data into the trained segmentation model to obtain a second prediction result;
training a regression model by taking the labeling result and the second prediction result as samples;
The training the regression model by using the labeling result and the second prediction result as samples includes:
acquiring a measurement index based on the labeling result and the second prediction result, wherein the measurement index reflects the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data;
And training the regression model by taking the measurement index and the second prediction result as samples.
2. The method of claim 1, wherein the measurement index further comprises: accuracy, sensitivity and specificity calculated from the labeling result and the second prediction result.
3. The method of claim 1, wherein training the regression model by taking the measurement index and the second prediction result as samples comprises:
Extracting features of the second prediction result to obtain second feature information;
and training a regression model to be trained based on the second characteristic information and the measurement index.
4. The method of claim 1, wherein the inputting the first prediction result into a regression model comprises:
Extracting features of the first prediction result to obtain first feature information;
And inputting the first characteristic information into the regression model.
5. The method according to claim 1, wherein the segmentation model is used for labeling a region of interest in the image data to obtain contour data of the region of interest as a prediction result; the image data is a human tissue image, and the region of interest is a lesion region.
6. A model evaluation device, characterized by comprising:
The first prediction result acquisition module is used for acquiring first image data, and inputting the first image data into the trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training a sample set of image data and labeling data;
The evaluation result acquisition module is used for inputting the first prediction result into a regression model to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data;
wherein the apparatus further comprises:
The model training module is used for acquiring second image data, and labeling the region of interest of the second image data to obtain a labeling result; inputting second image data into the trained segmentation model to obtain a second prediction result; training a regression model by taking the labeling result and the second prediction result as samples;
wherein, the model training module includes:
the measurement index calculation unit is used for obtaining measurement indexes based on the labeling result and the second prediction result, and the measurement indexes reflect the similarity between the distribution rule of the prediction result and the distribution rule of the labeling data;
and the regression model training unit is used for training the regression model by taking the measurement index and the second prediction result as samples.
7. A model evaluation apparatus, characterized in that the model evaluation apparatus comprises:
one or more processors;
A storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the model evaluation method of any one of claims 1-5.
8. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements a model evaluation method according to any one of claims 1-5.
CN202011626216.0A 2020-12-30 2020-12-31 Model evaluation method, device, equipment and medium Active CN112801940B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011626216.0A CN112801940B (en) 2020-12-31 2020-12-31 Model evaluation method, device, equipment and medium
US17/559,473 US20220207742A1 (en) 2020-12-30 2021-12-22 Image segmentation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011626216.0A CN112801940B (en) 2020-12-31 2020-12-31 Model evaluation method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN112801940A CN112801940A (en) 2021-05-14
CN112801940B (en) 2024-07-02

Family

ID=75807709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011626216.0A Active CN112801940B (en) 2020-12-30 2020-12-31 Model evaluation method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN112801940B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119645B (en) * 2021-11-25 2022-10-21 推想医疗科技股份有限公司 Method, system, device and medium for determining image segmentation quality
CN114399546A (en) * 2021-11-30 2022-04-26 际络科技(上海)有限公司 Target detection method and device
CN114003511B (en) * 2021-12-24 2022-04-15 支付宝(杭州)信息技术有限公司 Evaluation method and device for model interpretation tool

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110942072A (en) * 2019-12-31 2020-03-31 北京迈格威科技有限公司 Quality evaluation-based quality scoring and detecting model training and detecting method and device
CN111340123A (en) * 2020-02-29 2020-06-26 韶鼎人工智能科技有限公司 Image score label prediction method based on deep convolutional neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10726356B1 (en) * 2016-08-01 2020-07-28 Amazon Technologies, Inc. Target variable distribution-based acceptance of machine learning test data sets
CN107730087A (en) * 2017-09-20 2018-02-23 平安科技(深圳)有限公司 Forecast model training method, data monitoring method, device, equipment and medium
CN109753975B (en) * 2019-02-02 2021-03-09 杭州睿琪软件有限公司 Training sample obtaining method and device, electronic equipment and storage medium
CN111402260A (en) * 2020-02-17 2020-07-10 北京深睿博联科技有限责任公司 Medical image segmentation method, system, terminal and storage medium based on deep learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110942072A (en) * 2019-12-31 2020-03-31 北京迈格威科技有限公司 Quality evaluation-based quality scoring and detecting model training and detecting method and device
CN111340123A (en) * 2020-02-29 2020-06-26 韶鼎人工智能科技有限公司 Image score label prediction method based on deep convolutional neural network

Also Published As

Publication number Publication date
CN112801940A (en) 2021-05-14

Similar Documents

Publication Publication Date Title
US10789499B2 (en) Method for recognizing image, computer product and readable storage medium
CN111862044B (en) Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium
CN112801940B (en) Model evaluation method, device, equipment and medium
Yoon et al. Tumor identification in colorectal histology images using a convolutional neural network
CN111161311A (en) A method and device for visual multi-target tracking based on deep learning
US11967181B2 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
CN110705403A (en) Cell sorting method, cell sorting device, cell sorting medium, and electronic apparatus
US20170061608A1 (en) Cloud-based pathological analysis system and method
CN111882559B (en) ECG signal acquisition method and device, storage medium and electronic device
CN115034315B (en) Service processing method and device based on artificial intelligence, computer equipment and medium
CN113827240B (en) Emotion classification method, training device and training equipment for emotion classification model
CN109145955B (en) Method and system for wood identification
CN113656558A (en) Method and device for evaluating association rule based on machine learning
CN111507285A (en) Face attribute recognition method and device, computer equipment and storage medium
CN110968664A (en) Document retrieval method, device, equipment and medium
CN112614570A (en) Sample set labeling method, pathological image classification method and classification model construction method and device
CN117809124B (en) Medical image association calling method and system based on multi-feature fusion
CN108428234B (en) An interactive segmentation performance optimization method based on the evaluation of image segmentation results
Sameki et al. ICORD: Intelligent Collection of Redundant Data-A Dynamic System for Crowdsourcing Cell Segmentations Accurately and Efficiently.
CN115482419B (en) Data acquisition and analysis method and system for marine fishery products
CN113537407B (en) Image data evaluation processing method and device based on machine learning
CN111582404B (en) Content classification method, device and readable storage medium
CN112508135B (en) Model training method, pedestrian attribute prediction method, device and equipment
CN112365474A (en) Blood vessel extraction method, device, electronic equipment and storage medium
CN112597328B (en) Labeling method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant