
CN111402260A - Medical image segmentation method, system, terminal and storage medium based on deep learning - Google Patents


Info

Publication number
CN111402260A
Authority
CN
China
Prior art keywords
segmentation
data
network model
deep learning
labeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010095376.0A
Other languages
Chinese (zh)
Inventor
马杰超
张树
俞益洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Original Assignee
Beijing Shenrui Bolian Technology Co Ltd
Shenzhen Deepwise Bolian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shenrui Bolian Technology Co Ltd, Shenzhen Deepwise Bolian Technology Co Ltd filed Critical Beijing Shenrui Bolian Technology Co Ltd
Priority to CN202010095376.0A priority Critical patent/CN111402260A/en
Publication of CN111402260A publication Critical patent/CN111402260A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application provides a medical image segmentation method, system, terminal and storage medium based on deep learning, comprising the following steps: acquiring medical image data and preprocessing it; determining standard labeling data according to the experts' labeling results on the data to be labeled; inputting training sample data into a preset deep learning network model for training to obtain a trained segmentation network model; inputting each 2D slice of the test sample data into the trained segmentation network model and predicting a 2D segmentation result; merging the segmentation results predicted on multiple 2D slices into 3D segmentation regions according to whether they belong to the same lesion region, and connecting the 3D segmentation regions to obtain a 3D segmentation result; and calculating the actual volume of the condition from the 3D segmentation result. Using deep learning, the method accurately segments the contours of the two conditions on each slice of the CT scan and accumulates the area of each slice to obtain the precise volume of the final lesion, improving the accuracy and reliability of measuring pleural effusion and pneumothorax volume.

Description

Medical image segmentation method, system, terminal and storage medium based on deep learning
Technical Field
The present application relates to the field of medical image and computer-aided technology, and in particular, to a method, a system, a terminal, and a storage medium for medical image segmentation based on deep learning.
Background
The application of deep learning in the field of medical imaging is currently a research hotspot and is receiving growing attention in clinical and scientific work. Traditional image diagnosis relies on the clinician's subjective, experience-based judgment; it is time-consuming and highly subjective, its results may vary between readers, and it has become a bottleneck restricting the development of modern medical imaging. With the development of medical and computer technology, more doctors use computer-aided techniques to analyze and process lesions, for example using deep learning to quickly obtain the size, density and other properties of a lesion. This helps doctors locate the lesion and its region of interest more easily, yields more direct, accurate and clear disease information, greatly improves the accuracy and reliability of diagnosis, and reduces the occurrence of medical disputes.
Pleural effusion and pneumothorax are common clinical signs, and quantifying the two conditions is a problem frequently encountered in clinical practice: the amount is directly relevant to choosing a treatment plan and helps to judge the curative effect after pleural effusion treatment. Quantification of pleural effusion and pneumothorax usually relies on imaging, with X-ray and CT examination as the common modalities. Estimating effusion and pneumothorax volume from X-ray examination is rough, and a small amount of effusion or pneumothorax is easily missed. On CT, doctors can recognize even small effusions and pneumothoraces, but for lack of quantitative indices they can only roughly grade them as small, medium or large.
In the past decade, segmentation of pleural effusion and pneumothorax has been performed using traditional computer vision and pattern classification algorithms. Such methods rely on differences between target and background color, strong edge responses of the target, and the like. These methods, based on hand-designed features or thresholds, generally do not generalize well: across different devices, different tube currents and voltages, different scan doses, and different window width and level settings all lead to large differences in the segmentation results. Moreover, traditional measurement methods run on the CPU and are many times slower than GPU-based deep learning. While segmentation methods based on deep learning have been proposed, they have not yet been applied to pleural effusion and pneumothorax.
Therefore, a medical image segmentation method based on deep learning is needed to measure pleural effusion and pneumothorax volume quickly and accurately.
Disclosure of Invention
In view of the above deficiencies of the prior art, the present application provides a method, system, terminal and storage medium for medical image segmentation based on deep learning. The method classifies each pixel of a 2D slice image, merges adjacent pixels of the same class, and accumulates lesion areas over multiple slices to obtain the effusion or pneumothorax volume of the lesion.
In a first aspect, to solve the above technical problem, the present application provides a medical image segmentation method based on deep learning, including:
acquiring medical image data and preprocessing it;
determining standard labeling data according to the experts' labeling results on the data to be labeled;
inputting training sample data into a preset deep learning network model for training to obtain a trained segmentation network model;
inputting each 2D slice of the test sample data into the trained segmentation network model and predicting a 2D segmentation result;
merging the segmentation results predicted on multiple 2D slices into 3D segmentation regions according to whether they belong to the same lesion region, and connecting the 3D segmentation regions to obtain a 3D segmentation result;
calculating the actual volume of the condition according to the 3D segmentation result;
wherein the standard labeling data is divided into training sample data and test sample data.
Optionally, the acquiring and preprocessing the medical image data includes:
carrying out standardized acquisition on CT image data;
carrying out data desensitization processing on the CT image data;
wherein the CT examination data comprises the basic illness state, disease course, diagnosis reports conforming to international standards, pathology, image data and laboratory test data.
Optionally, the determining of standard annotation data according to the experts' annotation results on the data to be annotated includes:
acquiring the labeling results of a preset number of experts on the same data to be labeled;
comparing the labeling results, and judging whether the labeling results have objections;
if no objection exists, recording the labeling result into a database;
if there is an objection, an additional expert adjudicates whether the labeling result is recorded into the database.
Optionally, the inputting training sample data into a preset deep learning network model for training to obtain a trained segmentation network model includes:
acquiring three consecutive slices of standard labeling data, combining them as the input of the three channels of the segmentation network model, and training the segmentation network model.
Optionally, the inputting training sample data into a preset deep learning network model for training to obtain a trained segmentation network model includes:
inputting training sample data into a target detection/segmentation model with a feature pyramid network as the backbone for training, to obtain a trained segmentation network model.
Optionally, the inputting training sample data into a preset deep learning network model for training to obtain a trained segmentation network model includes:
scaling the training sample data to different sizes and inputting them into a preset deep learning network model for training, to obtain a trained segmentation network model.
Optionally, the merging of the segmentation results predicted on multiple 2D slices into 3D segmentation regions according to whether they belong to the same lesion region, and obtaining the 3D segmentation result through 3D segmentation region connection, includes:
smoothing the segmentation results predicted on multiple 2D slices and combining them into a complete 3D segmentation result.
Optionally, the calculating of the actual volume of the condition according to the 3D segmentation result includes:
according to the 3D segmentation result, back-calculating from the parameters set on the scanning device, and computing the actual lesion volume from the pixel-to-physical-distance ratio of the CT examination data and the spacing between slices.
Optionally, the method specifically includes:
carrying out standardized acquisition on CT image data;
carrying out data desensitization processing on the CT image data;
acquiring the labeling results of a preset number of experts on the same data to be labeled;
comparing the labeling results, and judging whether the labeling results have objections;
if no objection exists, recording the labeling result into a database;
if there is an objection, an additional expert adjudicates whether the labeling result is recorded into the database;
acquiring three consecutive slices of standard labeling data and combining them as the input of the three channels of the segmentation network model; inputting training sample data from the standard labeling data into a target detection/segmentation model with a feature pyramid network as the backbone; and scaling the training sample data to different sizes before inputting them into that model for training, to obtain a trained segmentation network model;
inputting each 2D slice of the test sample data into the trained segmentation network model and predicting a 2D segmentation result;
smoothing the segmentation results predicted on multiple 2D slices, merging them into 3D segmentation regions according to whether they belong to the same lesion region, and connecting the 3D segmentation regions to obtain a 3D segmentation result;
according to the 3D segmentation result, back-calculating from the parameters set on the scanning device, and computing the actual lesion volume from the pixel-to-physical-distance ratio of the CT examination data and the spacing between slices;
wherein the standard labeling data is divided into training sample data and test sample data.
In a second aspect, the present invention further provides a deep learning-based medical image segmentation system, including:
the data acquisition unit is configured for acquiring medical image data and preprocessing the medical image data;
the data annotation unit is configured to determine standard annotation data according to the experts' annotation results on the data to be annotated;
the model training unit is configured to input training sample data into a preset deep learning network model for training to obtain a trained segmentation network model;
the model prediction unit is configured to input each 2D slice of the test sample data into the trained segmentation network model and predict a 2D segmentation result;
the slice merging unit is configured to merge the segmentation results predicted on multiple 2D slices into 3D segmentation regions according to whether they belong to the same lesion region, and to obtain the 3D segmentation result through the connection of the 3D segmentation regions;
a volume calculation unit configured to calculate the actual volume of the condition from the 3D segmentation result.
Optionally, the data acquisition unit is specifically configured to:
carrying out standardized acquisition on CT image data;
carrying out data desensitization processing on the CT image data;
wherein the CT examination data comprises the basic illness state, disease course, diagnosis reports conforming to international standards, pathology, image data and laboratory test data.
Optionally, the data labeling unit is specifically configured to:
acquiring the labeling results of a preset number of experts on the same data to be labeled;
comparing the labeling results, and judging whether the labeling results have objections;
if no objection exists, recording the labeling result into a database;
if there is an objection, an additional expert adjudicates whether the labeling result is recorded into the database.
Optionally, the model training unit is specifically configured to:
and acquiring continuous upper and lower layers of standard labeling data, combining the continuous upper and lower layers as the input of three channels of the segmentation network model, and training the segmentation network model.
Optionally, the model training unit is specifically configured to:
and inputting training sample data to a target detection/segmentation model taking the characteristic pyramid network as a backbone network for training to obtain a trained segmentation network model.
Optionally, the model training unit is specifically configured to:
and (4) scaling the training sample data into different sizes, inputting the different sizes into a preset deep learning network model for training, and obtaining a trained segmentation network model.
Optionally, the layer merging unit is specifically configured to:
and smoothing the predicted segmentation results on a plurality of 2D levels, and combining the segmentation results into a complete 3D segmentation result.
Optionally, the volume calculation unit is specifically configured to:
according to the 3D segmentation result, back-calculating from the parameters set on the scanning device, and computing the actual lesion volume from the pixel-to-physical-distance ratio of the CT examination data and the spacing between slices.
In a third aspect, a terminal is provided, including:
a processor and a memory, wherein
the memory is used for storing a computer program, and
the processor is used for calling and running the computer program from the memory, so that the terminal executes the method described above.
In a fourth aspect, a computer storage medium is provided having stored therein instructions that, when executed on a computer, cause the computer to perform the method of the above aspects.
Compared with the prior art, the method has the following beneficial effects:
the application provides that the deep learning technology is applied to the volume measurement of pleural effusion and pneumothorax for the first time, and the invention provides the universal measurement method which can be applied to different data acquisition equipment, different equipment scanning parameters and different image reconstruction algorithms. The method is characterized in that the segmentation of a center layer is assisted by introducing complementary information of continuous upper and lower layers, and the 2D layer segmentation result is predicted by respectively modeling a space dimension (2D layer) and a time sequence dimension (upper and lower layers). And for each layer of the obtained 2D prediction, innovatively combining the 2D prediction into a 3D segmentation result by using smoothing processing according to 3D up-down connectivity, accurately segmenting the outlines of two diseases on an image in CT, and obtaining the accurate volume of a final focus according to preset parameters of scanning equipment, thereby reducing the influence caused by subjective factors of doctors, improving the diagnosis rate, and improving the accuracy, reliability and measurement efficiency of measuring the pleural effusion and the pneumatosis volume.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in their description are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a deep learning-based medical image segmentation method provided in an embodiment of the present application.
Fig. 2 is a flowchart illustrating that an expert annotates data to be annotated according to an embodiment of the present application.
Fig. 3 is a schematic block diagram of a deep learning-based medical image segmentation system according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a medical image segmentation terminal based on deep learning according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart illustrating a deep learning-based medical image segmentation method according to an embodiment of the present application, the method including:
S101: acquiring medical image data and preprocessing it;
S102: determining standard labeling data according to the experts' labeling results on the data to be labeled;
S103: inputting training sample data into a preset deep learning network model for training to obtain a trained segmentation network model;
S104: inputting each 2D slice of the test sample data into the trained segmentation network model and predicting a 2D segmentation result;
S105: merging the segmentation results predicted on multiple 2D slices into 3D segmentation regions according to whether they belong to the same lesion region, and connecting the 3D segmentation regions to obtain a 3D segmentation result;
S106: calculating the actual volume of the condition according to the 3D segmentation result;
wherein the standard labeling data is divided into training sample data and test sample data.
Based on the above embodiment, as a preferred embodiment, S101 acquires medical image data and performs preprocessing, including:
carrying out standardized acquisition on CT image data;
carrying out data desensitization processing on the CT image data;
wherein the examination data comprises the basic illness state, disease course, diagnosis reports conforming to international standards, pathology, image data and laboratory test data.
It should be noted that, ideally, the data should include the following information: basic illness state, course of disease, diagnostic reports meeting international standards, pathology, image data and laboratory test data. Data acquisition should follow local epidemiological and statistical requirements. Image data should be acquired in a standardized way and conform to the DICOM standard: the primary DICOM tag information must be complete, the voxel images must be continuous and complete, and the image acquisition and scanning data must conform to clinical specifications.
In addition, acquired data needs to be desensitized: information sensitive to patients must be deleted, in compliance with the relevant requirements, regulations and standards of national and medical-industry authorities for protecting user privacy data in the medical and health field.
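The desensitization step above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the header is modeled as a plain dictionary, and the tag list is an assumed subset of a real de-identification profile (e.g. DICOM PS3.15).

```python
# Illustrative sketch of the desensitization step: patient-sensitive
# fields are blanked before the scan enters the dataset.
# SENSITIVE_TAGS is an assumed, non-exhaustive list for illustration.

SENSITIVE_TAGS = {"PatientName", "PatientID", "PatientBirthDate",
                  "PatientAddress", "InstitutionName"}

def desensitize(dicom_header: dict) -> dict:
    """Return a copy of the header with sensitive tags blanked."""
    return {tag: ("" if tag in SENSITIVE_TAGS else value)
            for tag, value in dicom_header.items()}

header = {"PatientName": "DOE^JANE", "PatientID": "12345",
          "Modality": "CT", "SliceThickness": "5.0"}
clean = desensitize(header)
```

A production pipeline would operate on real DICOM objects (e.g. with pydicom) and follow the full de-identification profile rather than a hand-picked tag set.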
Based on the above embodiment, as a preferred embodiment, the step S102 of determining standard labeling data according to the experts' labeling results on the data to be labeled includes:
acquiring the labeling results of a preset number of experts on the same data to be labeled;
comparing the labeling results, and judging whether the labeling results have objections;
if no objection exists, recording the labeling result into a database;
if there is an objection, an additional expert adjudicates whether the labeling result is recorded into the database.
Specifically, as shown in fig. 2, fig. 2 is a flowchart illustrating how experts annotate the data to be annotated according to an embodiment of the present application. CT image data to be labeled is acquired and the same set of data is labeled by a team of experts: on a chosen observation plane such as the axial, coronal or sagittal plane, each expert delineates the boundary of the target sign (pneumothorax or pleural effusion) as the labeling standard. In addition, the quality of the labeling results is checked: the labeling results of the multiple experts are compared, data whose labeling results raise no objection are recorded into the database, and data with disputed labeling results must be adjudicated by an arbitration expert or a more senior expert before being accepted as standard labeling data.
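One plausible way to automate the "compare and flag disagreements" check above is to measure the overlap between two experts' masks and route low-agreement cases to the arbitrating expert. The Dice coefficient and the 0.9 threshold below are assumptions for illustration; the patent does not specify a comparison metric.

```python
import numpy as np

# Sketch of the quality-control step: two expert masks for the same
# slice are compared; if their Dice overlap is high the annotation is
# accepted, otherwise it is flagged for an arbitrating expert.

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap of two binary masks (1.0 when both are empty)."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

def needs_arbitration(mask_a, mask_b, threshold=0.9):
    return dice(mask_a, mask_b) < threshold

m1 = np.zeros((4, 4), bool); m1[1:3, 1:3] = True
m2 = np.zeros((4, 4), bool); m2[1:3, 1:3] = True   # identical annotation
```

With identical masks the Dice score is 1.0 and no arbitration is needed; a disjoint pair would score 0.0 and be flagged.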
Based on the above embodiment, as a preferred embodiment, S103 inputs training sample data to a preset deep learning network model for training, so as to obtain a trained segmented network model, including:
and acquiring continuous upper and lower layers of standard labeling data, combining the continuous upper and lower layers as the input of three channels of the segmentation network model, and training the segmentation network model.
It should be noted that three consecutive slices of standard annotation data are obtained and used together as the input of the segmentation network model, forming a pseudo-3D ("2.5D") structure; the output of the segmentation network model is a single 2D slice, so the other two input slices play an auxiliary role. Combining three consecutive slices as the three input channels of the model not only allows pre-trained network parameters to be reused but also takes the correlation between adjacent CT slices into account; for a three-slice input, the target of network learning is the annotation of the middle slice.
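The 2.5D input described above can be sketched as a simple channel-stacking function. This is an illustrative sketch, not the patent's code; the boundary-clamping behaviour for the first and last slices is an assumption.

```python
import numpy as np

# Sketch of the pseudo-3D ("2.5D") input: three consecutive CT slices
# are stacked as the three input channels, and the training target is
# the annotation of the middle slice only.

def make_25d_input(volume: np.ndarray, i: int) -> np.ndarray:
    """volume: (num_slices, H, W). Returns a (3, H, W) channel stack
    for slice i, clamping at the volume boundaries (an assumption)."""
    n = volume.shape[0]
    idx = [max(i - 1, 0), i, min(i + 1, n - 1)]
    return np.stack([volume[j] for j in idx], axis=0)

vol = np.arange(5 * 2 * 2, dtype=np.float32).reshape(5, 2, 2)
x = make_25d_input(vol, 2)   # channels are slices 1, 2, 3
```

Stacking neighbours as channels also lets an RGB-pretrained 2D backbone consume the data unchanged, which is one reason the three-channel form is convenient.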
Based on the above embodiment, as a preferred embodiment, S103 inputs training sample data to a preset deep learning network model for training, so as to obtain a trained segmented network model, including:
and inputting training sample data into a target detection/segmentation model taking a feature pyramid network (ResNet50+ FPN) as a backbone network for training to obtain a trained segmentation network model.
It should be noted that, because the sizes of pleural effusion and pneumothorax on chest CT vary greatly, many networks do not segment all sizes well. By combining features of different scales from several layers in a pyramid structure, the FPN can pick out smaller signs well and thereby improve the effect of the model.
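The pyramid-merging idea referenced above can be illustrated with a toy top-down pass. This is a minimal numpy sketch of the FPN principle, not the ResNet50+FPN model itself: it uses nearest-neighbour upsampling and identity lateral connections, where a real FPN uses learned 1×1 convolutions.

```python
import numpy as np

# Toy sketch of the FPN top-down pathway: coarse, semantically strong
# feature maps are upsampled and added to finer lateral maps, so small
# findings remain visible at high resolution.

def upsample2x(f: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upsampling of a 2D feature map."""
    return f.repeat(2, axis=0).repeat(2, axis=1)

def fpn_top_down(features):
    """features: list of 2D maps ordered fine -> coarse, each level
    half the resolution of the previous. Returns merged maps."""
    merged = [features[-1]]                 # start from the coarsest level
    for lateral in reversed(features[:-1]):
        merged.append(lateral + upsample2x(merged[-1]))
    return list(reversed(merged))           # back to fine -> coarse order

c2 = np.ones((8, 8)); c3 = np.ones((4, 4)); c4 = np.ones((2, 2))
p2, p3, p4 = fpn_top_down([c2, c3, c4])
```

Each finer output accumulates information from every coarser level, which is what lets the finest map still "see" context computed at low resolution.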
Based on the above embodiment, as a preferred embodiment, S103 inputs training sample data to a preset deep learning network model for training, so as to obtain a trained segmented network model, including:
and (4) scaling the training sample data into different sizes, inputting the different sizes into a preset deep learning network model for training, and obtaining a trained segmentation network model.
Specifically, the standard annotation data is input at different scales: the original images are scaled into 2D slice images of different sizes, such as (600, 600), (800, 800) and (1200, 1200), and input into the segmentation network model, which improves its segmentation of pleural effusions and pneumothoraces of different sizes. Here each original input of the standard labeling data is the pseudo-3D (2.5D) stack of 3 slices at 600 × 600, 800 × 800 or 1200 × 1200.
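The multi-scale input above amounts to resizing each slice to the three target sizes before feeding it to the network. The sketch below uses a hand-rolled nearest-neighbour resize to stay dependency-free; a real pipeline would use a library resampler (bilinear or similar), so this is an assumption for brevity.

```python
import numpy as np

# Sketch of the multi-scale training input: each 2D slice is rescaled
# to the sizes named in the text (600, 800, 1200) before being fed to
# the network. Nearest-neighbour index mapping is used for brevity.

def resize_nn(img: np.ndarray, size: int) -> np.ndarray:
    """Resize a 2D image to (size, size) by nearest-neighbour lookup."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[np.ix_(rows, cols)]

slice_img = np.random.rand(512, 512).astype(np.float32)
scales = [resize_nn(slice_img, s) for s in (600, 800, 1200)]
```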
Based on the above embodiment, as a preferred embodiment, the step S105 of merging the segmentation results predicted on the 2D slices into 3D segmentation regions according to whether they belong to the same lesion region, and obtaining the 3D segmentation result through 3D segmentation region connection, includes:
smoothing the segmentation results predicted on multiple 2D slices and combining them into a complete 3D segmentation result.
Specifically, each pixel of a 2D slice image is assigned a lesion class by the segmentation network model; adjacent pixels of the same lesion class are then merged, and the lesion areas of multiple slices are added up to obtain the 3D segmentation result, i.e. the effusion or pneumothorax volume of the lesion.
It should be noted that existing CT segmentation schemes directly take a 3D patch of interest as input and directly model a 3D mask through segmentation. Unlike existing 3D object segmentation, the present application does not require a 3D ROI to be provided in advance; that is, the input to the segmentation unit is not a 3D patch. Instead, the segmentation result is predicted directly on each slice (2D slice) of the CT image. For display, the segmentation results predicted on different 2D slices are combined into a 3D region according to whether they belong to the same lesion region, or according to their degree of correlation. For the 2D segmentation result of each slice, taking the class connection between slices into account, the 2D results on multiple slices are smoothed and merged into a complete 3D segmentation result according to 3D connectivity, which yields a better lesion-level segmentation result.
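One concrete way to realize the slice-merging described above is 3D connected-component labelling on the stacked per-slice masks. This is a sketch assuming SciPy is available; scipy's default 6-connectivity labelling stands in for the patent's "same lesion region" rule, and the smoothing step is omitted.

```python
import numpy as np
from scipy import ndimage

# Sketch of the slice-merging step: per-slice 2D masks are stacked into
# a volume and grouped into 3D lesion regions by connectivity across
# adjacent slices.

def merge_2d_masks(masks):
    """masks: list of (H, W) binary arrays, one per CT slice.
    Returns (labeled_volume, num_regions)."""
    volume = np.stack(masks, axis=0)
    labeled, num = ndimage.label(volume)   # default 6-connectivity in 3D
    return labeled, num

m0 = np.zeros((4, 4), int); m0[0, 0] = 1
m1 = np.zeros((4, 4), int); m1[0, 0] = 1; m1[3, 3] = 1
labeled, num = merge_2d_masks([m0, m1])
```

The pixel at (0, 0) on both slices forms one 3D region because the slices touch through-plane, while the isolated pixel at (3, 3) on the second slice becomes a separate region.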
Based on the above embodiment, as a preferred embodiment, the step S106 of calculating the actual volume of the condition from the 3D segmentation result includes:
according to the 3D segmentation result, back-calculating from the parameters set on the scanning device, and computing the actual lesion volume from the pixel-to-physical-distance ratio of the CT examination data and the spacing between slices.
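The volume computation above reduces to scaling the voxel count of the 3D mask by the physical size of one voxel, taken from the scan metadata (in-plane pixel spacing and inter-slice distance). The spacing values below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Sketch of the volume computation: number of segmented voxels times
# the physical volume of one voxel (pixel spacing x slice spacing).

def lesion_volume_ml(mask_3d: np.ndarray,
                     pixel_spacing_mm=(0.7, 0.7),
                     slice_spacing_mm=5.0) -> float:
    """mask_3d: (slices, H, W) binary mask. Returns volume in mL."""
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_spacing_mm
    return float(mask_3d.sum()) * voxel_mm3 / 1000.0   # mm^3 -> mL

mask = np.zeros((10, 4, 4), dtype=bool)
mask[2:4] = True                       # 2 slices x 16 pixels = 32 voxels
vol_ml = lesion_volume_ml(mask)        # about 0.078 mL at these spacings
```

In practice the spacings come from the DICOM header (PixelSpacing and the inter-slice distance), which is what "back-calculating from the parameters set on the scanning device" refers to.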
Specifically, the medical image segmentation method based on deep learning specifically includes the following steps:
carrying out standardized acquisition on CT image data;
carrying out data desensitization processing on the CT image data;
acquiring the labeling results of a preset number of experts on the same data to be labeled;
comparing the labeling results, and judging whether the labeling results have objections;
if no objection exists, recording the labeling result into a database;
if there is an objection, an additional expert adjudicates whether the labeling result is recorded into the database;
acquiring continuous upper and lower three layers of standard labeling data and combining the three layers as input of three channels of a segmentation network model, inputting training sample data in the standard labeling data to a target detection/segmentation model taking a characteristic pyramid network as a backbone network for training, and scaling the training sample data into different sizes and inputting the different sizes of the training sample data to the target detection/segmentation model taking the characteristic pyramid network as the backbone network for training to obtain a trained segmentation network model;
inputting each 2D layer data of the test sample data into a trained segmentation network model, and predicting a 2D layer segmentation result;
smoothing the predicted segmentation results on a plurality of 2D levels, merging the segmentation results into a 3D segmentation region according to whether the segmentation results belong to the same focus region, and connecting the 3D segmentation regions to obtain a 3D segmentation result;
performing, according to the 3D segmentation result, a reverse derivation from the parameters set on the imaging device, and calculating the actual lesion volume according to the pixel-to-physical-distance ratio of the CT examination data and the spacing between layers;
the standard marking data is divided into training sample data and test sample data.
In addition to CT, the method is also applicable to the detection and segmentation of other 3D medical images such as MRI and TOMO.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a medical image segmentation system based on deep learning according to an embodiment of the present application, where the system 300 includes:
a data acquisition unit 301 configured to acquire medical image data and perform preprocessing;
the data annotation unit 302 is configured to determine standard annotation data according to an annotation result of the data to be annotated by the expert;
the model training unit 303 is configured to input training sample data to a preset deep learning network model for training to obtain a trained segmentation network model;
a model prediction unit 304 configured to input each 2D layer data of the test sample data into the trained segmentation network model, and predict a 2D layer segmentation result;
a slice merging unit 305 configured to merge the segmentation results predicted on the plurality of 2D slices into 3D segmentation regions according to whether the segmentation results belong to the same lesion area, and obtain 3D segmentation results by connecting the 3D segmentation regions;
a volume calculation unit 306 configured to calculate an actual volume of the condition from the 3D segmentation result.
Based on the above embodiment, as a preferred embodiment, the data acquisition unit 301 is specifically configured to:
carrying out standardized acquisition on CT image data;
carrying out data desensitization processing on the CT image data;
wherein the CT image data comprises the basic condition, disease course, diagnosis reports conforming to international standards, pathology, imaging data and laboratory test data.
Based on the foregoing embodiment, as a preferred embodiment, the data labeling unit 302 is specifically configured to:
acquiring the labeling results of a preset number of experts on the same data to be labeled;
comparing the labeling results and judging whether there is any objection among them;
if there is no objection, recording the labeling result into the database;
if there is an objection, having other experts adjudicate whether the labeling result is recorded into the database.
Based on the foregoing embodiment, as a preferred embodiment, the model training unit 303 is specifically configured to:
acquiring three consecutive layers (the layer above, the current layer, and the layer below) of standard labeling data, combining them as the three-channel input of the segmentation network model, and training the segmentation network model.
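Building the three-channel input amounts to stacking each slice with its upper and lower neighbors. The sketch below is illustrative only; the function name and the edge-handling policy (repeating the boundary slice where no neighbor exists) are assumptions not specified in this application.

```python
def stack_adjacent_slices(volume, z):
    """Build a 3-channel input for slice z from slices z-1, z, z+1.

    volume: list of 2D slices in acquisition order.
    At the top and bottom of the scan the missing neighbor is replaced
    by the boundary slice itself (an assumed, common convention).
    Returns [above, current, below] as the three channels.
    """
    nz = len(volume)
    above = volume[max(z - 1, 0)]
    below = volume[min(z + 1, nz - 1)]
    return [above, volume[z], below]
```

Feeding three consecutive layers through the RGB-like channel axis gives a 2D network limited through-plane context without requiring a full 3D patch as input.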
Based on the foregoing embodiment, as a preferred embodiment, the model training unit 303 is specifically configured to:
inputting training sample data to a target detection/segmentation model with a feature pyramid network as the backbone for training, to obtain a trained segmentation network model.
Based on the foregoing embodiment, as a preferred embodiment, the model training unit 303 is specifically configured to:
scaling the training sample data to different sizes and inputting them to the preset deep learning network model for training, to obtain a trained segmentation network model.
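The multi-scale training step can be sketched as generating several resized copies of each training slice. This pure-Python nearest-neighbour resize is an illustrative stand-in for whatever interpolation the actual training pipeline uses; the function names and the scale set (0.5x, 1x, 2x) are assumptions.

```python
def resize_nearest(image, new_h, new_w):
    """Nearest-neighbour resize of a 2D slice stored as a list of lists."""
    h, w = len(image), len(image[0])
    return [[image[int(y * h / new_h)][int(x * w / new_w)]
             for x in range(new_w)]
            for y in range(new_h)]

def multi_scale_batch(image, scales=(0.5, 1.0, 2.0)):
    """Produce resized copies of one training slice at several scales."""
    h, w = len(image), len(image[0])
    return [resize_nearest(image, max(1, round(h * s)), max(1, round(w * s)))
            for s in scales]
```

Training on the same sample at several sizes exposes the feature-pyramid backbone to lesions of different apparent scales, which is the usual motivation for this kind of augmentation.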
Based on the foregoing embodiment, as a preferred embodiment, the layer merging unit 305 is specifically configured to:
smoothing the segmentation results predicted on the plurality of 2D levels and merging them into a complete 3D segmentation result.
Based on the above embodiment, as a preferred embodiment, the volume calculating unit 306 is specifically configured to:
performing, according to the 3D segmentation result, a reverse derivation from the parameters set on the imaging device, and calculating the actual lesion volume according to the pixel-to-physical-distance ratio of the CT examination data and the spacing between layers.
Fig. 4 is a schematic structural diagram of a terminal system 400 according to an embodiment of the present invention, where the terminal system 400 may be used to perform the deep learning based medical image segmentation according to the embodiment of the present invention.
The terminal system 400 may include a processor 410, a memory 420, and a communication unit 430. These components communicate via one or more buses. Those skilled in the art will appreciate that the server architecture shown in the figure is not limiting: it may be a bus architecture or a star architecture, and may include more or fewer components than shown, or a different arrangement of components.
The memory 420 may be used for storing instructions executed by the processor 410, and the memory 420 may be implemented by any type of volatile or non-volatile storage terminal or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. The executable instructions in memory 420, when executed by processor 410, enable terminal 400 to perform some or all of the steps in the method embodiments described below.
The processor 410 is the control center of the storage terminal; it connects the various parts of the entire electronic terminal using various interfaces and lines, and performs the functions of the electronic terminal and/or processes data by running or executing software programs and/or modules stored in the memory 420 and calling data stored in the memory. The processor may be composed of integrated circuits (ICs), for example a single packaged IC, or several packaged ICs with the same or different functions connected together. For example, the processor 410 may include only a central processing unit (CPU). In the embodiment of the present invention, the CPU may have a single operation core or multiple operation cores.
A communication unit 430 is configured to establish a communication channel so that the storage terminal can communicate with other terminals, and to receive user data sent by other terminals or send user data to other terminals.
The present invention also provides a computer storage medium, wherein the computer storage medium may store a program, and the program may include some or all of the steps in the embodiments provided by the present invention when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
Therefore, the method and the device use deep learning to predict 2D slice segmentation results through the segmentation network model, smooth and merge them into a 3D segmentation result according to 3D connectivity, accurately segment the contours of the two conditions on each slice of the CT image, and obtain the accurate lesion volume by accumulating the area on each slice, thereby reducing the influence of physicians' subjective factors, improving the diagnosis rate, and improving the accuracy, reliability and efficiency of pleural effusion and thoracic volume measurement.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general-purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be embodied in the form of a software product stored in a storage medium, such as a USB disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, which can store program code and includes instructions for enabling a computer terminal (which may be a personal computer, a server, a network terminal, or the like) to perform all or part of the steps of the method in the embodiments of the present invention.
The same and similar parts in the various embodiments in this specification may be referred to each other. Especially, for the terminal embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the description in the method embodiment.
In the embodiments provided in the present invention, it should be understood that the disclosed system and method can be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, systems or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
Although the present invention has been described in detail with reference to the drawings in connection with the preferred embodiments, the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention without departing from its spirit and scope, and such equivalent modifications or substitutions fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A medical image segmentation method based on deep learning is characterized by comprising the following steps:
acquiring medical image data and preprocessing the medical image data;
determining standard labeling data according to a labeling result of the data to be labeled by the expert;
inputting training sample data into a preset deep learning network model for training to obtain a trained segmentation network model;
inputting each 2D layer data of the test sample data into a trained segmentation network model, and predicting a 2D layer segmentation result;
merging the predicted segmentation results on a plurality of 2D levels into a 3D segmentation region according to whether the predicted segmentation results belong to the same focus region, and connecting the 3D segmentation regions to obtain a 3D segmentation result;
calculating an actual volume of the disorder according to the 3D segmentation result;
the standard marking data is divided into training sample data and test sample data.
2. The deep learning-based medical image segmentation method according to claim 1, wherein the acquiring and preprocessing the medical image data comprises:
carrying out standardized acquisition on CT image data;
carrying out data desensitization processing on the CT image data;
wherein the CT image data comprises basic illness state, disease course, diagnosis report according with international standard, pathology, image data and laboratory detection data.
3. The deep learning-based medical image segmentation method according to claim 1, wherein the determining standard labeling data according to the labeling result of the expert to-be-labeled data comprises:
acquiring the labeling results of a preset number of experts on the same data to be labeled;
comparing the labeling results and judging whether there is any objection among them;
if there is no objection, recording the labeling result into the database;
if there is an objection, having other experts adjudicate whether the labeling result is recorded into the database.
4. The deep learning-based medical image segmentation method according to claim 1, wherein the inputting training sample data into a preset deep learning network model for training to obtain a trained segmentation network model comprises:
acquiring three consecutive layers (the layer above, the current layer, and the layer below) of standard labeling data, combining them as the three-channel input of the segmentation network model, and training the segmentation network model.
5. The deep learning-based medical image segmentation method according to claim 1, wherein the inputting training sample data into a preset deep learning network model for training to obtain a trained segmentation network model comprises:
inputting training sample data to a target detection/segmentation model with a feature pyramid network as the backbone for training, to obtain a trained segmentation network model.
6. The deep learning-based medical image segmentation method according to claim 1, wherein the inputting training sample data into a preset deep learning network model for training to obtain a trained segmentation network model comprises:
scaling the training sample data to different sizes and inputting them to the preset deep learning network model for training, to obtain a trained segmentation network model.
7. The method for deep learning based medical image segmentation according to claim 1, wherein the merging of the predicted segmentation results on multiple 2D levels into 3D segmentation regions according to whether the prediction results belong to the same lesion region, and obtaining the 3D segmentation results through 3D segmentation region connection comprises:
smoothing the segmentation results predicted on the plurality of 2D levels and merging them into a complete 3D segmentation result.
8. The deep learning-based medical image segmentation method according to claim 1, wherein the calculating an actual disease volume according to the 3D segmentation result comprises:
performing, according to the 3D segmentation result, a reverse derivation from the parameters set on the imaging device, and calculating the actual lesion volume according to the pixel-to-physical-distance ratio of the CT examination data and the spacing between layers.
9. The method for medical image segmentation based on deep learning according to claim 1, specifically comprising:
carrying out standardized acquisition on CT image data;
carrying out data desensitization processing on the CT image data;
acquiring the labeling results of a preset number of experts on the same data to be labeled;
comparing the labeling results and judging whether there is any objection among them;
if there is no objection, recording the labeling result into the database;
if there is an objection, having other experts adjudicate whether the labeling result is recorded into the database;
acquiring three consecutive layers (the layer above, the current layer, and the layer below) of standard labeling data and combining them as the three-channel input of the segmentation network model; inputting the training sample data in the standard labeling data to a target detection/segmentation model with a feature pyramid network as the backbone for training; and scaling the training sample data to different sizes before inputting it to the model for training, thereby obtaining a trained segmentation network model;
inputting each 2D layer data of the test sample data into a trained segmentation network model, and predicting a 2D layer segmentation result;
smoothing the predicted segmentation results on a plurality of 2D levels, merging the segmentation results into a 3D segmentation region according to whether the segmentation results belong to the same focus region, and connecting the 3D segmentation regions to obtain a 3D segmentation result;
performing, according to the 3D segmentation result, a reverse derivation from the parameters set on the imaging device, and calculating the actual lesion volume according to the pixel-to-physical-distance ratio of the CT examination data and the spacing between layers;
the standard marking data is divided into training sample data and test sample data.
10. A system for medical image segmentation based on deep learning, comprising:
the data acquisition unit is configured for acquiring medical image data and preprocessing the medical image data;
the data annotation unit is configured for determining standard annotation data according to an annotation result of the data to be annotated by the expert;
the model training unit is configured to input training sample data into a preset deep learning network model for training to obtain a trained segmentation network model;
the model prediction unit is configured to input each 2D layer data of the test sample data into the trained segmentation network model and predict a 2D layer segmentation result;
the layer merging unit is configured to merge the predicted segmentation results on the plurality of 2D layers into a 3D segmentation region according to whether the predicted segmentation results belong to the same lesion area, and the 3D segmentation results are obtained through the connection of the 3D segmentation regions;
a volume calculation unit configured to calculate an actual volume of the condition from the 3D segmentation result.
11. A terminal, comprising:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any one of claims 1-9.
12. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-9.
CN202010095376.0A 2020-02-17 2020-02-17 Medical image segmentation method, system, terminal and storage medium based on deep learning Pending CN111402260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010095376.0A CN111402260A (en) 2020-02-17 2020-02-17 Medical image segmentation method, system, terminal and storage medium based on deep learning


Publications (1)

Publication Number Publication Date
CN111402260A true CN111402260A (en) 2020-07-10

Family

ID=71428468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010095376.0A Pending CN111402260A (en) 2020-02-17 2020-02-17 Medical image segmentation method, system, terminal and storage medium based on deep learning

Country Status (1)

Country Link
CN (1) CN111402260A (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783475A (en) * 2020-07-28 2020-10-16 北京深睿博联科技有限责任公司 A Semantic Visual Localization Method and Device Based on Phrase Relation Propagation
CN111815608A (en) * 2020-07-13 2020-10-23 北京小白世纪网络科技有限公司 New coronary pneumonia patient recovery time prediction method and system based on deep learning
CN112348774A (en) * 2020-09-29 2021-02-09 深圳市罗湖区人民医院 CT image segmentation method, terminal and storage medium suitable for bladder cancer
CN112435212A (en) * 2020-10-15 2021-03-02 杭州脉流科技有限公司 Brain focus region volume obtaining method and device based on deep learning, computer equipment and storage medium
CN112435213A (en) * 2020-10-21 2021-03-02 深圳大学 Head and neck structure image segmentation and classification method and system
KR102226743B1 (en) * 2020-09-15 2021-03-12 주식회사 딥노이드 Apparatus for quantitatively measuring pneumothorax in chest radiographic images based on a learning model and method therefor
CN112712508A (en) * 2020-12-31 2021-04-27 杭州依图医疗技术有限公司 Method and device for determining pneumothorax
CN112801940A (en) * 2020-12-31 2021-05-14 深圳市联影高端医疗装备创新研究院 Model evaluation method, device, equipment and medium
CN112869758A (en) * 2020-12-31 2021-06-01 杭州依图医疗技术有限公司 Method and device for determining pleural effusion
CN113081052A (en) * 2021-03-31 2021-07-09 陕西省肿瘤医院 Processing method of volume data of ultrasonic scanning target
CN113096093A (en) * 2021-04-12 2021-07-09 中山大学 Method, system and device for calculating quantity and volume of calculi in CT (computed tomography) image
CN113240661A (en) * 2021-05-31 2021-08-10 平安科技(深圳)有限公司 Deep learning-based lumbar vertebra analysis method, device, equipment and storage medium
CN113469229A (en) * 2021-06-18 2021-10-01 中山大学孙逸仙纪念医院 Method and device for automatically labeling breast cancer focus based on deep learning
CN113707312A (en) * 2021-09-16 2021-11-26 人工智能与数字经济广东省实验室(广州) Blood vessel quantitative identification method and device based on deep learning
CN114004970A (en) * 2021-11-09 2022-02-01 粟海信息科技(苏州)有限公司 Tooth area detection method, device, equipment and storage medium
CN114764812A (en) * 2022-03-14 2022-07-19 什维新智医疗科技(上海)有限公司 Focal region segmentation device
CN114820584A (en) * 2022-05-27 2022-07-29 北京安德医智科技有限公司 Lung focus positioner
CN115222675A (en) * 2022-07-01 2022-10-21 北京深睿博联科技有限责任公司 Hysteromyoma automatic typing method and device based on deep learning
CN115797302A (en) * 2022-12-06 2023-03-14 华科精准(北京)医疗科技有限公司 Blood vessel segmentation method and device based on deep learning model
CN116596912A (en) * 2023-06-02 2023-08-15 湖南大学 Method, device and computer equipment for dimension measurement of abnormal structures in medical images
CN120047473A (en) * 2025-04-24 2025-05-27 复旦大学附属眼耳鼻喉科医院 Subretinal effusion region segmentation and volume calculation method, system, product and terminal
US12322111B2 (en) 2020-12-30 2025-06-03 United Imaging Research Institute of Innovative Medical Equipment Image segmentation method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108536662A (en) * 2018-04-16 2018-09-14 苏州大学 A kind of data mask method and device
WO2018222755A1 (en) * 2017-05-30 2018-12-06 Arterys Inc. Automated lesion detection, segmentation, and longitudinal identification
CN109886179A (en) * 2019-02-18 2019-06-14 深圳视见医疗科技有限公司 The image partition method and system of cervical cell smear based on Mask-RCNN
CN110047128A (en) * 2018-01-15 2019-07-23 西门子保健有限责任公司 The method and system of X ray CT volume and segmentation mask is rebuild from several X-ray radiogram 3D
US20190251694A1 (en) * 2018-02-14 2019-08-15 Elekta, Inc. Atlas-based segmentation using deep-learning
CN110310281A (en) * 2019-07-10 2019-10-08 重庆邮电大学 A method for detection and segmentation of lung nodules in virtual medicine based on Mask-RCNN deep learning
CN110782446A (en) * 2019-10-25 2020-02-11 杭州依图医疗技术有限公司 A method and device for determining the volume of a pulmonary nodule


Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815608B (en) * 2020-07-13 2023-08-25 北京小白世纪网络科技有限公司 New coronatine pneumonia patient rehabilitation time prediction method and system based on deep learning
CN111815608A (en) * 2020-07-13 2020-10-23 北京小白世纪网络科技有限公司 New coronary pneumonia patient recovery time prediction method and system based on deep learning
CN111783475A (en) * 2020-07-28 2020-10-16 北京深睿博联科技有限责任公司 A Semantic Visual Localization Method and Device Based on Phrase Relation Propagation
KR102226743B1 (en) * 2020-09-15 2021-03-12 주식회사 딥노이드 Apparatus for quantitatively measuring pneumothorax in chest radiographic images based on a learning model and method therefor
CN112348774A (en) * 2020-09-29 2021-02-09 深圳市罗湖区人民医院 CT image segmentation method, terminal and storage medium suitable for bladder cancer
CN112348774B (en) * 2020-09-29 2025-01-03 深圳市罗湖区人民医院 A CT image segmentation method, terminal and storage medium suitable for bladder cancer
CN112435212A (en) * 2020-10-15 2021-03-02 杭州脉流科技有限公司 Brain focus region volume obtaining method and device based on deep learning, computer equipment and storage medium
CN112435213A (en) * 2020-10-21 2021-03-02 深圳大学 Head and neck structure image segmentation and classification method and system
CN112435213B (en) * 2020-10-21 2023-09-29 深圳大学 Head and neck structure image segmentation and classification method and system
US12322111B2 (en) 2020-12-30 2025-06-03 United Imaging Research Institute of Innovative Medical Equipment Image segmentation method, device, equipment and storage medium
CN112801940A (en) * 2020-12-31 2021-05-14 深圳市联影高端医疗装备创新研究院 Model evaluation method, device, equipment and medium
CN112712508A (en) * 2020-12-31 2021-04-27 杭州依图医疗技术有限公司 Method and device for determining pneumothorax
CN112869758A (en) * 2020-12-31 2021-06-01 杭州依图医疗技术有限公司 Method and device for determining pleural effusion
CN112712508B (en) * 2020-12-31 2024-05-14 杭州依图医疗技术有限公司 Pneumothorax determination method and pneumothorax determination device
CN113081052A (en) * 2021-03-31 2021-07-09 陕西省肿瘤医院 Processing method of volume data of ultrasonic scanning target
CN113096093A (en) * 2021-04-12 2021-07-09 中山大学 Method, system and device for calculating quantity and volume of calculi in CT (computed tomography) image
CN113240661A (en) * 2021-05-31 2021-08-10 平安科技(深圳)有限公司 Deep learning-based lumbar vertebra analysis method, device, equipment and storage medium
CN113240661B (en) * 2021-05-31 2023-09-26 平安科技(深圳)有限公司 Deep learning-based lumbar vertebra bone analysis method, device, equipment and storage medium
CN113469229A (en) * 2021-06-18 2021-10-01 中山大学孙逸仙纪念医院 Method and device for automatically labeling breast cancer focus based on deep learning
CN113707312A (en) * 2021-09-16 2021-11-26 人工智能与数字经济广东省实验室(广州) Blood vessel quantitative identification method and device based on deep learning
CN114004970A (en) * 2021-11-09 2022-02-01 粟海信息科技(苏州)有限公司 Tooth area detection method, device, equipment and storage medium
CN114764812A (en) * 2022-03-14 2022-07-19 什维新智医疗科技(上海)有限公司 Focal region segmentation device
CN114820584B (en) * 2022-05-27 2023-02-21 北京安德医智科技有限公司 Lung focus localization device
CN114820584A (en) * 2022-05-27 2022-07-29 北京安德医智科技有限公司 Lung focus positioner
CN115222675A (en) * 2022-07-01 2022-10-21 北京深睿博联科技有限责任公司 Hysteromyoma automatic typing method and device based on deep learning
CN115797302A (en) * 2022-12-06 2023-03-14 华科精准(北京)医疗科技有限公司 Blood vessel segmentation method and device based on deep learning model
CN116596912A (en) * 2023-06-02 2023-08-15 湖南大学 Method, device and computer equipment for dimension measurement of abnormal structures in medical images
CN120047473A (en) * 2025-04-24 2025-05-27 复旦大学附属眼耳鼻喉科医院 Subretinal effusion region segmentation and volume calculation method, system, product and terminal
CN120047473B (en) * 2025-04-24 2025-08-05 复旦大学附属眼耳鼻喉科医院 Subretinal fluid area segmentation and volume calculation method, system, product and terminal

Similar Documents

Publication Publication Date Title
CN111402260A (en) Medical image segmentation method, system, terminal and storage medium based on deep learning
CN111047591A (en) Focal volume measuring method, system, terminal and storage medium based on deep learning
EP3021753B1 (en) Systems and methods for determining hepatic function from liver scans
CN111862044A (en) Ultrasound image processing method, apparatus, computer equipment and storage medium
CN115131300B (en) Intelligent three-dimensional diagnosis method and system for osteoarthritis based on deep learning
US11471096B2 (en) Automatic computerized joint segmentation and inflammation quantification in MRI
Brugnara et al. Automated volumetric assessment with artificial neural networks might enable a more accurate assessment of disease burden in patients with multiple sclerosis
CN111602173B (en) Brain tomography data analysis method
CN111445449A (en) Classification method, apparatus, computer equipment and storage medium of region of interest
US10878564B2 (en) Systems and methods for processing 3D anatomical volumes based on localization of 2D slices thereof
CN111080584A (en) Quality control method, computer device and readable storage medium for medical images
CN111340756B (en) Medical image lesion detection merging method, system, terminal and storage medium
CN110717905A (en) Brain image detection method, computer equipment and storage medium
CN110969623B (en) Lung CT multi-symptom automatic detection method, system, terminal and storage medium
Dan et al. DeepGA for automatically estimating fetal gestational age through ultrasound imaging
JP2025128132A (en) Weakly supervised lesion segmentation
EP4167184B1 (en) Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection
CN114881976B (en) Method, device and computer equipment for determining standard cross-section images of femur and humerus
Kim et al. A deep learning approach for automated segmentation of kidneys and exophytic cysts in individuals with autosomal dominant polycystic kidney disease
Ghomi et al. Segmentation of COVID-19 pneumonia lesions: A deep learning approach
CN111681205A (en) Image analysis method, computer equipment and storage medium
Barbosa et al. Towards automatic quantification of the epicardial fat in non-contrasted CT images
US9436889B2 (en) Image processing device, method, and program
Yin et al. Ultrasonographic segmentation of fetal lung with deep learning
Delmoral et al. Segmentation of pathological liver tissue with dilated fully convolutional networks: A preliminary study

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200710