CN111028206A - Prostate cancer automatic detection and classification system based on deep learning - Google Patents
- Publication number: CN111028206A (application CN201911146224.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- prostate
- classification
- images
- dwi
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30081—Prostate
Abstract
The invention relates to a deep learning-based automatic prostate cancer detection and classification system, comprising an automatic detection module that detects prostate lesion regions with a deep learning method and an automatic classification module that classifies the detected lesion regions using machine learning or a classification network. Unlike traditional prostate cancer classification methods, which feed the whole MRI region or the whole prostate organ into the classification network, the system uses abnormal-region segmentation to further restrict the classification network's input. This reduces the false positive rate of the classification model and improves its classification accuracy. At the same time, the detected lesion region is output together with the classification result, assisting the doctor in the automatic confirmation and diagnosis of prostate cancer and improving the doctor's working efficiency.
Description
Technical Field
The invention relates to a deep learning-based automatic prostate cancer detection and classification system, and belongs to the technical fields of image processing and medicine.
Background
Prostate cancer is the most common male malignancy: in European and American countries its incidence ranks first among male malignancies, and its mortality is second only to lung cancer. The incidence of prostate cancer in China is markedly lower than in Europe and America, but it has been rising in recent years, especially in the elderly population. Early-stage prostate cancer can be effectively treated and controlled, and early diagnosis can effectively reduce mortality. Accurate diagnosis of prostate cancer can therefore greatly improve its early detection rate.
Magnetic resonance imaging (MRI) has become an important method for diagnosing prostate cancer, owing to its advantages of non-invasive examination, diversified scanning sequences, and clear soft-tissue structure. Prostate MRI includes T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and the apparent diffusion coefficient (ADC) map. The T2WI image provides clear tissue contrast; the DWI image reflects how strongly water-molecule diffusion is restricted in the tissue, where the b value, the diffusion sensitivity coefficient, is an important DWI parameter; and the ADC map is computed by post-processing DWI images acquired at different b values. The images of these three sequences each reflect characteristics of prostate cancer lesions to some extent and are of great significance for judging prostate cancer.
In recent years, as the incidence of prostate cancer has increased, so has the number of MRI images. MRI examinations involve multiple scanning sequences and complex structures, so a doctor must combine several sequence images for diagnosis; the process is time-consuming and labor-intensive, and the diagnostic result depends heavily on the doctor's level of experience. Deep learning methods based on convolutional neural networks (CNNs) are widely applied in the field of computer vision and can achieve high-precision image classification, image segmentation, object detection, and similar tasks. Using CNNs for the detection and classification of prostate cancer can therefore effectively reduce doctors' workload and greatly improve their working efficiency.
Existing deep learning-based prostate cancer diagnosis methods fall mainly into two types. The first feeds the whole MRI image into a CNN to obtain a benign/malignant binary classification for the whole image; this achieves only a qualitative diagnosis of the case, cannot accurately locate the lesion, and suffers from unsatisfactory accuracy because of the large amount of redundant input. The second uses an encoder-decoder CNN to obtain a lesion segmentation and classifies the case as benign or malignant simply by whether any lesion is segmented; this approach has a high false positive rate and cannot verify that the segmented lesion actually lies within the prostate organ. In addition, some current diagnostic methods use only T2WI images, ignoring sequences such as ADC and DWI that characterize the restriction of water-molecule diffusion, a feature of great value for identifying malignant prostate tumors. There is therefore a need for a multi-parameter MRI method for automatic prostate cancer detection and classification that integrates prostate organ information, lesion information, and case classification.
Disclosure of Invention
The purpose of the invention is to automatically detect and classify prostate cancer lesion regions using multi-parameter MRI images, so that a doctor can diagnose prostate cancer accurately and rapidly and precisely locate the lesion region.
In order to achieve the above object, the present invention provides a deep learning-based automatic prostate cancer detection and classification system, characterized in that the system comprises an automatic detection module for automatically detecting prostate lesion regions based on a deep learning method and an automatic classification module for automatically classifying the lesion regions detected by the automatic detection module using machine learning or a classification network, wherein:
the automatic detection module includes:
the data acquisition unit is used for acquiring multi-parameter MRI image data containing normal and lesion samples, the data comprising T2WI, ADC, and DWI images;
the labeling unit is used for labeling the original data according to the characteristics of the multi-parameter MRI images, generating mask maps from the organ region of interest labeled on the T2WI image and the lesion regions of interest labeled on the ADC and DWI images, to serve as the gold standard for model evaluation;
the image database is used for storing the original multi-parameter MRI image data and the labeling results obtained by the labeling unit;
the image registration unit is used for registering the ADC image and the DWI image onto the T2WI image, ensuring that the organ gold standard labeled on the T2WI image matches the ADC and DWI images;
the image normalization unit is used for normalizing the multi-parameter MRI image data;
the image enhancement unit is used for augmenting the multi-parameter MRI image data, to address sample imbalance and sample shortage and to balance the amounts of cancerous and normal samples;
the prostate organ segmentation model is used for segmenting the prostate organ region from the image: its input is the T2WI image, its output has three channels (the central zone, the peripheral zone, and the background of the prostate), a network with an encoder-decoder structure is selected as its segmentation network, the network output is converted into probabilities with softmax, and for each pixel the class with the maximum of the three output probabilities is taken as the pixel's class;
the result post-processing unit optimizes, by graphical methods, the organ segmentation result output by the prostate organ segmentation model and the abnormal-region segmentation result obtained by the prostate abnormal region segmentation unit, and resamples them by interpolation to the original size of the input data, yielding the final organ segmentation result or the final prostate abnormal-region segmentation result;
the prostate abnormal region segmentation unit obtains DWI images with b values in the range 0-2000 s/mm², maps the organ segmentation result onto the ADC and DWI images registered by the image registration unit to obtain the organ regions of interest of all sequence images, and combines the images by stitching as the input of an abnormal-region segmentation model, which adopts a pre-trained encoder-decoder network to obtain the prostate abnormal-region segmentation result;
the input data of the automatic classification module are DWI images at one or more b values, ADC images, and T2WI images. The prostate abnormal-region segmentation result obtained by the prostate abnormal region segmentation unit and the result post-processing unit is mapped onto the ADC and DWI images registered by the image registration unit for region-of-interest extraction; images in which these units found no abnormal region are randomly cropped within the prostate to obtain regions of interest of the same size. The images are stitched together and input into a feature extractor, and the extracted features are classified by a classifier into benign and malignant, yielding the benign/malignant discrimination result. For cases judged positive, the region of interest extracted from the abnormal-region segmentation result is output as the lesion detection result, assisting the doctor in the localized diagnosis of the lesion.
Preferably, the data acquired by the data acquisition unit is subjected to desensitization processing to delete personal information of the patient.
Preferably, the labeling unit performs prostate organ boundary labeling on the T2WI image and lesion boundary labeling on the ADC and DWI images, and benign/malignant classification labels are assigned using the multi-parameter MRI images.
Preferably, the image normalization unit performs image normalization, gray-value normalization, and spatial normalization on the MRI images: gray-value normalization scales the gray values of the acquired MRI images to a uniform range, and spatial normalization resamples the MRI images to a uniform size.
Preferably, the prostate abnormal region segmentation unit acquires the DWI image at the desired b value directly from the scanning device, or estimates it from the DWI image without diffusion weighting (b = 0 s/mm²) using the formula S(b) = S(0)·e^(−b×ADC), where S(0) is the DWI image without diffusion weighting, S(b) is the estimated DWI image at the corresponding b value, and ADC is the apparent diffusion coefficient.
Preferably, the feature extractor of the automatic classification module employs a deep convolutional neural network CNN.
Preferably, the input of the classifier of the automatic classification module is the feature obtained by fusing the features extracted by the neural network feature extractor with radiomics features.
Unlike traditional prostate cancer classification methods, which feed the whole MRI region or the whole prostate organ into the classification network, the abnormal-region segmentation further restricts the classification network's input. This reduces the false positive rate of the classification model and improves its classification accuracy; at the same time, the detected lesion region is output together with the classification result, assisting the doctor in the automatic confirmation and diagnosis of prostate cancer and improving the doctor's working efficiency.
Drawings
FIG. 1 is a general structural framework diagram of the deep learning-based automatic prostate cancer detection and classification method according to the present invention;
FIG. 2 is a schematic diagram of the image preprocessing in the method;
FIG. 3 is a schematic diagram of the prostate organ segmentation in the method;
FIG. 4 is a schematic diagram of the prostate abnormal region segmentation in the method;
FIG. 5 is a schematic diagram of the automatic prostate cancer discrimination and detection in the method.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Furthermore, various changes or modifications may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents also fall within the scope defined by the appended claims.
The deep learning-based automatic prostate cancer detection and classification system provided by the invention is described in detail below, taking the following specific method for automatic prostate cancer detection and classification as an example. The method comprises the following steps:
the acquisition and labeling of S1 data mainly comprises the following steps:
S1-1, data acquisition: multi-parameter MRI imaging data containing normal and lesion samples are acquired; the specific images include T2WI, ADC, and DWI images. All data are desensitized to remove patients' personal information.
S1-2, labeling: accurate data annotation is an important guarantee of the effectiveness of the algorithm model, and annotating the original data according to the characteristics of the multi-parameter MRI images improves the model's accuracy. Organ boundaries are most evident on the T2WI image, so prostate organ boundaries are labeled on the T2WI image. ADC and DWI are the diagnostic images most commonly used by clinicians, so lesion boundaries are labeled on the ADC and DWI images. Benign/malignant classification labels are also assigned using the multi-parameter MRI images.
The image annotation can be done by a computer or manually and can involve at least two experts at the same or different levels. When the experts' labels differ, experts at the same level discuss them, or an expert at a higher level corrects the result. The labeled organ region of interest (hereinafter organ ROI) and lesion region of interest (hereinafter lesion ROI) generate mask maps that serve as the gold standard for model evaluation.
S1-3, image database generation: the original multi-parameter MRI image data and the labeling results together form the prostate cancer image database.
S2, image data preprocessing, the flow of which is shown in FIG. 2, mainly comprising three steps: image registration, image normalization, and image enhancement:
S2-1, image registration: in clinical diagnosis, doctors combine image information from different sequences for comprehensive diagnosis, and the method draws on this idea. Quantitative analysis across different images requires that they be strictly aligned. The ADC and DWI images are registered onto the T2WI image using an affine transformation, a 3D Euler transformation, or the like, so that the organ gold standard labeled on the T2WI image can be matched to the ADC and DWI images.
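As an illustration of applying such a registration transform, the sketch below warps a labeled mask with a 2-D affine map using nearest-neighbour backward sampling in NumPy. This is a toy stand-in: real registration would first estimate the matrix and offset from the images themselves (e.g. with a registration toolkit), which is not shown here.

```python
import numpy as np

def apply_affine_nn(img: np.ndarray, matrix: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """Warp an image with an affine map x_src = matrix @ x_dst + offset,
    using nearest-neighbour backward (pull) sampling."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()])   # destination pixel coordinates
    src = matrix @ coords + offset[:, None]       # map each into source space
    src = np.rint(src).astype(int)
    valid = (src[0] >= 0) & (src[0] < h) & (src[1] >= 0) & (src[1] < w)
    out.ravel()[valid] = img[src[0, valid], src[1, valid]]
    return out

# Shift a tiny organ mask one pixel down and right (identity rotation, offset -1).
mask = np.zeros((4, 4), dtype=int)
mask[0, 0] = 1
shifted = apply_affine_nn(mask, np.eye(2), np.array([-1.0, -1.0]))
```

Backward sampling (looping over destination pixels and pulling from the source) avoids holes in the warped result, which is why registration toolkits use it rather than pushing source pixels forward.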
S2-2, image normalization: owing to the complexity of prostate MRI, the original MRI images cannot be used directly for model training and prediction and must first be normalized. Specific normalization includes image normalization, gray-value normalization, and spatial normalization. Image normalization includes, but is not limited to, histogram equalization and outlier removal; gray-value normalization scales the gray values of the acquired MRI images to a uniform range, such as [0, 1] or [0, 255]; spatial normalization resamples the images to a uniform size, using interpolation to avoid severe image distortion after normalization.
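The gray-value and spatial normalization steps can be sketched as follows; percentile-based outlier clipping and nearest-neighbour resampling are illustrative choices, not mandated by the text:

```python
import numpy as np

def normalize_gray(img: np.ndarray, lo_pct: float = 1.0, hi_pct: float = 99.0) -> np.ndarray:
    """Clip outlier intensities at the given percentiles, then rescale to [0, 1]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    clipped = np.clip(img, lo, hi)
    return (clipped - lo) / (hi - lo + 1e-8)

def resample_nearest(img: np.ndarray, shape: tuple) -> np.ndarray:
    """Spatial normalization: nearest-neighbour resampling to a uniform size."""
    rows = (np.arange(shape[0]) * img.shape[0] / shape[0]).astype(int)
    cols = (np.arange(shape[1]) * img.shape[1] / shape[1]).astype(int)
    return img[np.ix_(rows, cols)]

img = np.arange(16, dtype=float).reshape(4, 4)
norm = normalize_gray(img)             # intensities now in [0, 1]
small = resample_nearest(img, (2, 2))  # uniform spatial size
```

In practice a smoother interpolation (linear or spline) would be used for the intensity images, with nearest-neighbour reserved for label masks so class indices are not blended.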
S2-3, image enhancement: data augmentation mainly addresses sample imbalance and sample shortage. Because the MRI data collected in the image database are clinical prostate data from a hospital, it is difficult to guarantee balanced amounts of cancerous and normal samples, so the data set is balanced by oversampling or undersampling. To improve the model's generalization ability and avoid overfitting, the training data set is expanded by random transformations, including but not limited to one or more of random rotation, flipping, distortion, translation, cropping, and noise addition.
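A minimal augmentation sketch, assuming 2-D slices and using only the rotation and flipping transforms from the list above; the key point it illustrates is that the same random transform must be applied to the image and its label mask:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray, mask: np.ndarray):
    """Apply one identical random flip/rotation to an image and its label mask."""
    k = int(rng.integers(0, 4))        # random number of 90-degree rotations
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    if rng.random() < 0.5:             # random horizontal flip
        img, mask = np.fliplr(img), np.fliplr(mask)
    return img.copy(), mask.copy()

img = np.arange(9, dtype=float).reshape(3, 3)
mask = (img > 4).astype(int)
aug_img, aug_mask = augment(img, mask)
```

Rotations and flips are intensity-preserving, so the augmented image contains exactly the original pixel values rearranged, and the mask keeps the same number of foreground pixels.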
S3, prostate organ segmentation to obtain the organ segmentation result, the flow of which is shown in FIG. 3; the main steps are MRI image selection, network setup, and segmentation result post-processing:
S3-1, MRI scan image selection: among the T2WI, ADC, and DWI sequence images, the prostate organ boundaries are clearest and the central and peripheral zones best delineated on the T2WI image, and the expert labels of the prostate organ are also on the T2WI image, so the T2WI image is selected as the input of the prostate organ segmentation model.
S3-2, segmentation network setup in the prostate organ segmentation model: to meet the requirements of prostate organ segmentation, a network with an encoder-decoder structure is selected as the segmentation network (including but not limited to models such as UNet, SegNet, and SegAN) and is obtained by pre-training. The output has three channels: the central zone, the peripheral zone, and the background of the prostate. The network output is converted into probabilities with softmax, and for each pixel the class with the maximum of the three output probabilities is taken as the pixel's class.
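The softmax-plus-argmax rule for per-pixel classification can be sketched in NumPy; the channel order (0 = background, 1 = central zone, 2 = peripheral zone) is an assumption for illustration:

```python
import numpy as np

def pixelwise_classes(logits: np.ndarray) -> np.ndarray:
    """Convert per-pixel network outputs of shape (H, W, 3) into class labels
    by softmax over the channel axis followed by argmax.
    Assumed channel order: 0 = background, 1 = central zone, 2 = peripheral zone."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(shifted)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs.argmax(axis=-1)

logits = np.array([[[2.0, 0.1, 0.1], [0.1, 3.0, 0.2]],
                   [[0.0, 0.0, 5.0], [1.0, 0.5, 0.2]]])
labels = pixelwise_classes(logits)
# labels == [[0, 1], [2, 0]]
```

Since softmax is monotonic, the argmax of the probabilities equals the argmax of the raw logits; the softmax step matters when actual probabilities (e.g. for confidence thresholds) are needed.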
S3-3, result post-processing: the segmentation result output by S3-2 is optimized by graphical methods, and the result from the segmentation network model is resampled by interpolation to the original size of the input data to obtain the final organ segmentation result. The specific graphical optimization methods include noise removal, hole filling, and boundary smoothing.
S4, prostate abnormal region segmentation, the flow of which is shown in FIG. 4; the main steps are MRI image selection, organ ROI extraction, network setup, and result post-processing:
S4-1, MRI image selection: ADC and DWI images are the main basis for doctors to diagnose and search for lesions in clinical practice. The choice of b value in DWI strongly influences the result: a high b value improves the contrast between tumor lesions and benign tissue, but increasing b also increases image distortion and related problems. DWI images at multiple b values can both highlight the lesion area and maintain a high signal-to-noise ratio; in particular, several DWI images covering a wide range of b values account for the influence of low, medium, and high b values and improve the model's detection precision.
DWI images at different b values can be acquired in two ways: either the image at the desired b value is taken directly from the scanning device, or it is estimated from the DWI image without diffusion weighting (b = 0 s/mm²) by the formula
S(b) = S(0)·e^(−b×ADC)
where S(0) is the DWI image without diffusion weighting, S(b) is the estimated DWI image at the corresponding b value, and ADC is the apparent diffusion coefficient. Specifically, DWI images at one or more b values in the range 0-2000 s/mm², together with the ADC image and the T2WI image, serve as input data for the model.
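The estimation formula can be sketched directly in NumPy; the array shapes and values below are illustrative (ADC in mm²/s, b in s/mm²):

```python
import numpy as np

def synthesize_dwi(s0: np.ndarray, adc: np.ndarray, b: float) -> np.ndarray:
    """Estimate a DWI image at diffusion weighting b from the b = 0 image
    and the ADC map, using S(b) = S(0) * exp(-b * ADC)."""
    return s0 * np.exp(-b * adc)

# Toy example: a 2x2 slice with uniform ADC of 1e-3 mm^2/s.
s0 = np.full((2, 2), 100.0)    # signal without diffusion weighting
adc = np.full((2, 2), 1.0e-3)  # apparent diffusion coefficient map
s_b1000 = synthesize_dwi(s0, adc, b=1000.0)
# each pixel: 100 * exp(-1000 * 1e-3) = 100 * exp(-1) ≈ 36.79
```

Synthesizing high-b images this way lets the model see a range of b values without the distortion and noise penalties of actually acquiring them at high diffusion weighting.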
S4-2, organ ROI extraction: because the abnormal region occupies only a small proportion of the whole MRI image, using the whole image for the segmentation task would cause severe pixel-level sample imbalance, so organ ROI extraction is required. Specifically, the organ segmentation result obtained in step S3-3 is mapped onto the registered ADC and DWI images to obtain the organ ROI regions of all sequence images, and the images are combined by stitching as the input of the abnormal-region segmentation model.
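The ROI mapping and stitching step can be sketched as masking each co-registered sequence with the organ segmentation and stacking the results into one multi-channel input; channel-wise stacking is one plausible reading of "stitching", assumed here for illustration:

```python
import numpy as np

def extract_organ_roi(img: np.ndarray, organ_mask: np.ndarray) -> np.ndarray:
    """Zero out everything outside the organ segmentation."""
    return img * (organ_mask > 0)

def stack_sequences(*images: np.ndarray) -> np.ndarray:
    """Combine co-registered sequence images into one multi-channel array."""
    return np.stack(images, axis=-1)

t2 = np.ones((4, 4))
adc = np.full((4, 4), 2.0)
dwi = np.full((4, 4), 3.0)
mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1  # organ segmentation mapped from the T2WI gold standard

model_input = stack_sequences(*(extract_organ_roi(x, mask) for x in (t2, adc, dwi)))
```

This is exactly why registration (S2-1) matters: masking and stacking are only meaningful once all sequences share the T2WI coordinate grid.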
S4-3, network setup: the result obtained in step S4-2 is input into a pre-trained encoder-decoder network to obtain the prostate abnormal-region segmentation result.
S4-4, result post-processing: the segmentation result obtained in step S4-3 is optimized by graphical methods, and the result from the segmentation network model is resampled by interpolation to the original size of the input data. The specific graphical optimization methods include noise removal, hole filling, and boundary smoothing.
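Noise removal, one of the graphical post-processing methods named above, can be sketched as dropping small connected components from the binary segmentation; 4-connectivity and the size threshold are illustrative choices, and hole filling or boundary smoothing would be analogous morphological operations:

```python
from collections import deque

import numpy as np

def remove_small_components(mask: np.ndarray, min_size: int) -> np.ndarray:
    """Drop 4-connected foreground components smaller than min_size pixels."""
    out = mask.astype(bool).copy()
    seen = np.zeros_like(out, dtype=bool)
    h, w = out.shape
    for i in range(h):
        for j in range(w):
            if out[i, j] and not seen[i, j]:
                comp, queue = [], deque([(i, j)])
                seen[i, j] = True
                while queue:                     # BFS over one component
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and out[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(comp) < min_size:         # too small: treat as noise
                    for y, x in comp:
                        out[y, x] = False
    return out.astype(mask.dtype)

mask = np.zeros((5, 5), dtype=int)
mask[1:4, 1:4] = 1   # a 9-pixel segmented region
mask[0, 4] = 1       # a single-pixel noise speck
cleaned = remove_small_components(mask, min_size=2)
```

In production one would typically reach for a library connected-components routine instead of this explicit BFS, but the effect is the same: isolated specks vanish while real regions survive.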
S5, automatic discrimination and detection of prostate cancer:
Judging a case as benign or malignant merely from whether the prostate lesion detection model finds a lesion entails a high false positive rate and low detection precision. The invention therefore further optimizes the model: the detection result of the prostate abnormal region is additionally input into a classification model for benign/malignant classification to improve diagnostic precision, and the lesion-region detection result is output together with the classification result to help doctors locate the lesion quickly. The flow is shown in FIG. 5 and specifically comprises the following steps:
S5-1, MRI image selection: as in step S4-1, one or more b-value DWI images, the ADC image, and the T2WI image are selected as the input data of the model.
S5-2, ROI extraction: the segmentation result of the prostate abnormal region obtained in step S4-4 is mapped onto the registered ADC and DWI images, and the ROI is extracted according to the input-size requirement of the classification model. For images whose segmentation result contains no abnormal region, a region of the same size is randomly cropped within the extent of the prostate. The multiple images are combined by stitching as the input of the prostate cancer discrimination model.
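The random cropping of same-size negative ROIs can be sketched as picking a random crop centre that lies inside the prostate mask. The function name, the 2-D simplification, and the boundary handling are assumptions for illustration:

```python
import numpy as np

def random_crop_in_mask(image, prostate_mask, size, seed=None):
    """Return a random `size` x `size` crop whose centre lies inside the
    prostate mask, used to build negative (no-lesion) ROIs of the same
    size as the lesion ROIs."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(prostate_mask)
    half = size // 2
    # Keep only centres that allow a full crop inside the image bounds.
    ok = ((ys >= half) & (ys + half < image.shape[0]) &
          (xs >= half) & (xs + half < image.shape[1]))
    idx = rng.integers(ok.sum())
    cy, cx = ys[ok][idx], xs[ok][idx]
    return image[cy - half: cy - half + size, cx - half: cx - half + size]

img = np.random.default_rng(0).normal(size=(64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 20:44] = True
patch = random_crop_in_mask(img, mask, size=16, seed=0)
```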
S5-3, feature extraction: to obtain feature vectors for classification, a deep convolutional neural network (CNN) is used as the feature extractor, and CNN feature vectors are obtained through a pre-trained network model. To improve the representational power of the features, radiomics (image-omics) feature vectors are additionally extracted and fused with the CNN feature vectors.
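A common realization of such feature fusion is to normalize each vector and concatenate them, so neither source dominates the classifier purely through scale. The z-score normalization below is an assumption; the patent specifies fusion but not its mechanics:

```python
import numpy as np

def fuse_features(cnn_feat, radiomics_feat):
    """Fuse a CNN feature vector with a radiomics feature vector by
    z-score normalising each and concatenating them."""
    def zscore(v):
        v = np.asarray(v, dtype=float)
        s = v.std()
        return (v - v.mean()) / s if s > 0 else v - v.mean()
    return np.concatenate([zscore(cnn_feat), zscore(radiomics_feat)])

# Illustrative sizes: a 512-dim CNN embedding and 100 radiomics features.
fused = fuse_features(np.random.default_rng(1).normal(size=512),
                      np.random.default_rng(2).normal(size=100))
```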
S5-4, benign/malignant classification: to improve classification accuracy, a classifier is applied to the extracted features for binary benign/malignant classification. The specific classification method may be a deep learning method or a machine learning method, where the machine learning methods include, but are not limited to, classifiers such as SVM, XGBoost, and LightGBM.
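Using one of the named classifiers (SVM, via scikit-learn), the binary benign/malignant step might look as follows. The fused-feature data here is synthetic, two well-separated toy clusters, purely for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins for fused feature vectors: benign ROIs clustered
# around -2, malignant ROIs clustered around +2, in 8 dimensions.
rng = np.random.default_rng(0)
X_benign = rng.normal(loc=-2.0, size=(40, 8))
X_malignant = rng.normal(loc=+2.0, size=(40, 8))
X = np.vstack([X_benign, X_malignant])
y = np.array([0] * 40 + [1] * 40)  # 0 = benign, 1 = malignant

clf = SVC(kernel="rbf")  # RBF-kernel SVM, a standard binary classifier
clf.fit(X, y)
pred = clf.predict(rng.normal(loc=2.0, size=(1, 8)))
```

XGBoost or LightGBM would slot in the same way, since all three expose a fit/predict interface over the fused feature matrix.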
S5-5, outputting the detection result: after the benign/malignant discrimination result is obtained, for positive cases, ROI extraction is further performed on the segmentation result of the prostate abnormal region obtained in step S4-4, and this is output as the lesion detection result to assist the doctor in localizing and diagnosing the lesion.
Claims (7)
1. The deep learning-based automatic prostate cancer detection and classification system is characterized by comprising an automatic detection module for realizing automatic detection of prostate lesion areas by a deep learning-based method and an automatic classification module for realizing automatic classification of the prostate lesion areas detected by the automatic detection module by utilizing machine learning or a classification network, wherein:
the automatic detection module includes:
the data acquisition unit is used for acquiring multi-parameter MRI image data containing a normal sample and a lesion sample, wherein the multi-parameter MRI image data comprises a T2WI image, an ADC image and a DWI image;
the marking unit is used for marking original data aiming at the characteristics of the multi-parameter MRI image, and generating a mask image as a gold standard of model evaluation by utilizing an organ region of interest marked on the T2WI image and a focus region of interest marked on the ADC image and the DWI image;
the image database is used for storing original multi-parameter MRI image data and a labeling result obtained by the labeling unit;
the image registration unit is used for registering the ADC image and the DWI image onto the T2WI image, and ensuring that the organ gold standard marked on the T2WI image is matched with the ADC image and the DWI image;
the image standardization unit is used for carrying out standardization processing on multi-parameter MRI image data;
the image enhancement unit is used for enhancing multi-parameter MRI image data so as to solve the problems of unbalanced data samples and insufficient data samples and ensure the data quantity balance of cancerous samples and normal samples;
the prostate organ segmentation model is used for segmenting a prostate organ region from an image, the input of the prostate organ segmentation model is a T2WI image, the output has three channels corresponding to the central zone, the peripheral zone and the background of the prostate, a network of encoder-decoder structure is selected as the segmentation network of the prostate organ segmentation model, the segmentation network output is converted into probabilities by softmax, and for each pixel the class with the maximum of the three output probabilities is selected as the pixel's class;
the result post-processing unit optimizes an organ segmentation result output by the prostate organ segmentation model and a prostate abnormal region segmentation result obtained by the prostate abnormal region segmentation unit by adopting a graphical method, and resamples the organ segmentation result obtained by the prostate organ segmentation model or the prostate abnormal region segmentation result to the original size of input data by adopting an interpolation method to obtain a final organ segmentation result or a final prostate abnormal region segmentation result;
the abnormal prostate region segmentation unit is used for segmenting the abnormal prostate region from DWI images with one or more b values in the range of 0 to 2000 s/mm²; the organ segmentation result is mapped onto the ADC image and the DWI image registered by the image registration unit to obtain the organ regions of interest of all sequence images, and a plurality of images are combined by stitching as the input of the abnormal region segmentation model, wherein the abnormal region segmentation model adopts a pre-trained encoder-decoder network to obtain the prostate abnormal region segmentation result;
the input data of the automatic classification module are DWI images of one or more b values, ADC images and T2WI images; the segmentation result of the abnormal prostate region obtained by the abnormal prostate region segmentation unit and the result post-processing unit is mapped onto the ADC and DWI images registered by the image registration unit for region-of-interest extraction; for images processed by the abnormal prostate region segmentation unit and the result post-processing unit that contain no abnormal region, regions of interest of the same size are randomly cropped within the extent of the prostate; the images are jointly input into the feature extractor by stitching, the features extracted by the feature extractor are classified by the classifier into benign and malignant classes to obtain the benign/malignant discrimination result, and for cases with a positive result, region-of-interest extraction is performed on the output of the abnormal prostate region segmentation unit and the result post-processing unit and output as the lesion detection result, assisting doctors in the localized diagnosis of lesions.
2. The deep learning-based automatic prostate cancer detection and classification system of claim 1, wherein the data collected by the data collection unit is desensitized to remove patient personal information.
3. The deep learning-based automatic prostate cancer detection and classification system according to claim 1, wherein said labeling unit performs prostate organ boundary labeling in said T2WI image and lesion boundary labeling on said ADC image and DWI image, and simultaneously labels benign/malignant classification labels using the multi-parameter MRI images.
4. The deep learning-based automatic prostate cancer detection and classification system according to claim 1, wherein the image normalization unit performs gray-value normalization and spatial normalization on the MRI images, wherein the gray-value normalization is used for normalizing the gray values of the acquired MRI images to a uniform range, and the spatial normalization is used for resampling the MRI images to a uniform size.
5. The deep learning-based automatic prostate cancer detection and classification system according to claim 1, wherein the prostate abnormal region segmentation unit obtains DWI images of corresponding b values directly from a scanning device or calculates them from the DWI image without diffusion weighting, wherein the b value of the DWI image without diffusion weighting is 0 s/mm². The calculation formula is S(b) = S(0)·e^(−b×ADC), where S(0) represents the DWI image without diffusion weighting, S(b) represents the estimated DWI image at the corresponding b value, and ADC represents the apparent diffusion coefficient.
6. The deep learning-based automatic prostate cancer detection and classification system according to claim 1, wherein the feature extractor of said automatic classification module employs a deep Convolutional Neural Network (CNN).
7. The deep learning-based prostate cancer automatic detection and classification system of claim 1, wherein the input of the classifier of said automatic classification module is the fusion of the features obtained by the neural network feature extractor and the radiomics (image-omics) features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911146224.2A CN111028206A (en) | 2019-11-21 | 2019-11-21 | Prostate cancer automatic detection and classification system based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911146224.2A CN111028206A (en) | 2019-11-21 | 2019-11-21 | Prostate cancer automatic detection and classification system based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111028206A true CN111028206A (en) | 2020-04-17 |
Family
ID=70201663
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911146224.2A Pending CN111028206A (en) | 2019-11-21 | 2019-11-21 | Prostate cancer automatic detection and classification system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111028206A (en) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111539947A (en) * | 2020-04-30 | 2020-08-14 | 上海商汤智能科技有限公司 | Image detection method, training method of related model, related device and equipment |
CN111798425A (en) * | 2020-06-30 | 2020-10-20 | 天津大学 | Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning |
CN111798410A (en) * | 2020-06-01 | 2020-10-20 | 深圳市第二人民医院(深圳市转化医学研究院) | Cancer cell pathological grading method, device, equipment and medium based on deep learning model |
CN111881724A (en) * | 2020-06-12 | 2020-11-03 | 山东师范大学 | A classification system for esophageal varices based on LightGBM and feature fusion |
CN111986148A (en) * | 2020-07-15 | 2020-11-24 | 万达信息股份有限公司 | Quick Gleason scoring system for digital pathological image of prostate |
CN112069795A (en) * | 2020-08-28 | 2020-12-11 | 平安科技(深圳)有限公司 | Corpus detection method, apparatus, device and medium based on mask language model |
CN112465779A (en) * | 2020-11-26 | 2021-03-09 | 中国科学院苏州生物医学工程技术研究所 | Full-automatic detecting and cutting method and system for choledocystic focus in abdominal CT |
CN112489788A (en) * | 2020-11-25 | 2021-03-12 | 武汉大学中南医院 | Multi-modal image analysis method and system for cancer diagnosis |
CN112651951A (en) * | 2020-12-30 | 2021-04-13 | 深圳高性能医疗器械国家研究院有限公司 | DCE-MRI-based breast cancer classification method |
CN113487621A (en) * | 2021-05-25 | 2021-10-08 | 平安科技(深圳)有限公司 | Medical image grading method and device, electronic equipment and readable storage medium |
CN113723461A (en) * | 2021-08-02 | 2021-11-30 | 逸超科技(北京)有限公司 | Ultrasound apparatus and ultrasound image analysis method |
CN114596258A (en) * | 2022-01-20 | 2022-06-07 | 湖南中医药大学 | Prostate cancer classification detection system based on deep learning |
CN114862873A (en) * | 2022-05-09 | 2022-08-05 | 李洁 | CT image segmentation processing method and device |
CN114897900A (en) * | 2022-07-13 | 2022-08-12 | 山东奥洛瑞医疗科技有限公司 | Mammary gland nuclear magnetic resonance image focus positioning method based on artificial intelligence |
CN115619810A (en) * | 2022-12-19 | 2023-01-17 | 中国医学科学院北京协和医院 | Method, system and equipment for partitioning prostate |
CN115953420A (en) * | 2023-03-15 | 2023-04-11 | 深圳市联影高端医疗装备创新研究院 | Deep learning network model and medical image segmentation method, device and system |
US20230186463A1 (en) * | 2021-12-09 | 2023-06-15 | Merative Us L.P. | Estimation of b-value in prostate magnetic resonance diffusion weighted images |
CN116386902A (en) * | 2023-04-24 | 2023-07-04 | 北京透彻未来科技有限公司 | Artificial intelligent auxiliary pathological diagnosis system for colorectal cancer based on deep learning |
US11963788B2 (en) | 2021-12-17 | 2024-04-23 | City University Of Hong Kong | Graph-based prostate diagnosis network and method for using the same |
US11986285B2 (en) | 2020-10-29 | 2024-05-21 | National Taiwan University | Disease diagnosing method and disease diagnosing system |
CN118098522A (en) * | 2024-04-28 | 2024-05-28 | 北方健康医疗大数据科技有限公司 | Medical data labeling method, system and medium based on large model |
CN119014954A (en) * | 2024-09-29 | 2024-11-26 | 北京大学第一医院(北京大学第一临床医学院) | A method, device and program product for prostate puncture biopsy |
CN119048593A (en) * | 2024-08-07 | 2024-11-29 | 北京科技大学 | Shoulder cyst positioning method and system based on magnetic resonance imaging image |
CN119477806A (en) * | 2024-10-16 | 2025-02-18 | 华东数字医学工程研究院 | A lesion detection method and system based on image recognition |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778005A (en) * | 2016-12-27 | 2017-05-31 | 中南民族大学 | Prostate cancer computer aided detection method and system based on multi-parameter MRI |
CN107133638A (en) * | 2017-04-01 | 2017-09-05 | 中南民族大学 | Multi-parameter MRI prostate cancer CAD method and system based on two graders |
US20180240233A1 (en) * | 2017-02-22 | 2018-08-23 | Siemens Healthcare Gmbh | Deep Convolutional Encoder-Decoder for Prostate Cancer Detection and Classification |
CN108898608A (en) * | 2018-05-28 | 2018-11-27 | 广东技术师范学院 | A kind of prostate ultrasonic image division method and equipment |
CN109636813A (en) * | 2018-12-14 | 2019-04-16 | 中南民族大学 | The dividing method and system of prostate magnetic resonance image |
CN110111296A (en) * | 2019-01-30 | 2019-08-09 | 北京慧脑云计算有限公司 | The automatic segmenting system of small infarct lesion and its method under the new hair cortex of deep learning |
CN110188792A (en) * | 2019-04-18 | 2019-08-30 | 万达信息股份有限公司 | The characteristics of image acquisition methods of prostate MRI 3-D image |
CN110415234A (en) * | 2019-07-29 | 2019-11-05 | 北京航空航天大学 | Brain Tumor Segmentation Method Based on Multiparameter Magnetic Resonance Imaging |
US20190370965A1 (en) * | 2017-02-22 | 2019-12-05 | The United States Of America, As Represented By The Secretary, Department Of Health And Human Servic | Detection of prostate cancer in multi-parametric mri using random forest with instance weighting & mr prostate segmentation by deep learning with holistically-nested networks |
- 2019-11-21 CN CN201911146224.2A patent/CN111028206A/en active Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778005A (en) * | 2016-12-27 | 2017-05-31 | 中南民族大学 | Prostate cancer computer aided detection method and system based on multi-parameter MRI |
US20180240233A1 (en) * | 2017-02-22 | 2018-08-23 | Siemens Healthcare Gmbh | Deep Convolutional Encoder-Decoder for Prostate Cancer Detection and Classification |
US20190370965A1 (en) * | 2017-02-22 | 2019-12-05 | The United States Of America, As Represented By The Secretary, Department Of Health And Human Servic | Detection of prostate cancer in multi-parametric mri using random forest with instance weighting & mr prostate segmentation by deep learning with holistically-nested networks |
CN107133638A (en) * | 2017-04-01 | 2017-09-05 | 中南民族大学 | Multi-parameter MRI prostate cancer CAD method and system based on two graders |
CN108898608A (en) * | 2018-05-28 | 2018-11-27 | 广东技术师范学院 | A kind of prostate ultrasonic image division method and equipment |
CN109636813A (en) * | 2018-12-14 | 2019-04-16 | 中南民族大学 | The dividing method and system of prostate magnetic resonance image |
CN110111296A (en) * | 2019-01-30 | 2019-08-09 | 北京慧脑云计算有限公司 | The automatic segmenting system of small infarct lesion and its method under the new hair cortex of deep learning |
CN110188792A (en) * | 2019-04-18 | 2019-08-30 | 万达信息股份有限公司 | The characteristics of image acquisition methods of prostate MRI 3-D image |
CN110415234A (en) * | 2019-07-29 | 2019-11-05 | 北京航空航天大学 | Brain Tumor Segmentation Method Based on Multiparameter Magnetic Resonance Imaging |
Non-Patent Citations (4)
Title |
---|
RUIMING CAO ET AL: "Joint Prostate Cancer Detection and Gleason Score Prediction in mp-MRI via FocalNet", IEEE Transactions on Medical Imaging, vol. 38, no. 11, 27 January 2019 (2019-01-27), pages 2496-2506, XP011755651, DOI: 10.1109/TMI.2019.2901928 *
YANG ZHENSEN: "SVM-based lesion analysis of prostate ultrasound images", Chinese Journal of Medical Instrumentation, vol. 32, no. 6, 30 November 2008 (2008-11-30), pages 398-401 *
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111539947B (en) * | 2020-04-30 | 2024-03-29 | 上海商汤智能科技有限公司 | Image detection method, related model training method, related device and equipment |
CN111539947A (en) * | 2020-04-30 | 2020-08-14 | 上海商汤智能科技有限公司 | Image detection method, training method of related model, related device and equipment |
CN111798410A (en) * | 2020-06-01 | 2020-10-20 | 深圳市第二人民医院(深圳市转化医学研究院) | Cancer cell pathological grading method, device, equipment and medium based on deep learning model |
CN111881724A (en) * | 2020-06-12 | 2020-11-03 | 山东师范大学 | A classification system for esophageal varices based on LightGBM and feature fusion |
CN111881724B (en) * | 2020-06-12 | 2022-06-28 | 山东师范大学 | Esophageal varices classification system based on LightGBM and feature fusion |
CN111798425B (en) * | 2020-06-30 | 2022-05-27 | 天津大学 | Intelligent detection method of mitotic figures in gastrointestinal stromal tumors based on deep learning |
CN111798425A (en) * | 2020-06-30 | 2020-10-20 | 天津大学 | Intelligent detection method for mitotic image in gastrointestinal stromal tumor based on deep learning |
CN111986148A (en) * | 2020-07-15 | 2020-11-24 | 万达信息股份有限公司 | Quick Gleason scoring system for digital pathological image of prostate |
CN111986148B (en) * | 2020-07-15 | 2024-03-08 | 万达信息股份有限公司 | Quick Gleason scoring system for digital pathology image of prostate |
CN112069795A (en) * | 2020-08-28 | 2020-12-11 | 平安科技(深圳)有限公司 | Corpus detection method, apparatus, device and medium based on mask language model |
US11986285B2 (en) | 2020-10-29 | 2024-05-21 | National Taiwan University | Disease diagnosing method and disease diagnosing system |
CN112489788B (en) * | 2020-11-25 | 2024-06-07 | 武汉大学中南医院 | Multi-mode image analysis method and system for cancer diagnosis |
CN112489788A (en) * | 2020-11-25 | 2021-03-12 | 武汉大学中南医院 | Multi-modal image analysis method and system for cancer diagnosis |
CN112465779B (en) * | 2020-11-26 | 2024-02-27 | 中国科学院苏州生物医学工程技术研究所 | Full-automatic detection and segmentation method and system for choledocholithiasis focus in abdomen CT |
CN112465779A (en) * | 2020-11-26 | 2021-03-09 | 中国科学院苏州生物医学工程技术研究所 | Full-automatic detecting and cutting method and system for choledocystic focus in abdominal CT |
CN112651951A (en) * | 2020-12-30 | 2021-04-13 | 深圳高性能医疗器械国家研究院有限公司 | DCE-MRI-based breast cancer classification method |
CN113487621A (en) * | 2021-05-25 | 2021-10-08 | 平安科技(深圳)有限公司 | Medical image grading method and device, electronic equipment and readable storage medium |
CN113487621B (en) * | 2021-05-25 | 2024-07-12 | 山东省眼科研究所 | Medical image grading method, device, electronic equipment and readable storage medium |
CN113723461A (en) * | 2021-08-02 | 2021-11-30 | 逸超科技(北京)有限公司 | Ultrasound apparatus and ultrasound image analysis method |
US20230186463A1 (en) * | 2021-12-09 | 2023-06-15 | Merative Us L.P. | Estimation of b-value in prostate magnetic resonance diffusion weighted images |
US12333714B2 (en) * | 2021-12-09 | 2025-06-17 | Merative Us L.P. | Estimation of b-value in prostate magnetic resonance diffusion weighted images |
US11963788B2 (en) | 2021-12-17 | 2024-04-23 | City University Of Hong Kong | Graph-based prostate diagnosis network and method for using the same |
CN114596258A (en) * | 2022-01-20 | 2022-06-07 | 湖南中医药大学 | Prostate cancer classification detection system based on deep learning |
CN114862873A (en) * | 2022-05-09 | 2022-08-05 | 李洁 | CT image segmentation processing method and device |
CN114862873B (en) * | 2022-05-09 | 2025-06-10 | 北京国诚润佳科技发展有限责任公司 | CT image segmentation processing method and device |
CN114897900A (en) * | 2022-07-13 | 2022-08-12 | 山东奥洛瑞医疗科技有限公司 | Mammary gland nuclear magnetic resonance image focus positioning method based on artificial intelligence |
CN115619810A (en) * | 2022-12-19 | 2023-01-17 | 中国医学科学院北京协和医院 | Method, system and equipment for partitioning prostate |
CN115619810B (en) * | 2022-12-19 | 2023-10-03 | 中国医学科学院北京协和医院 | A prostate segmentation method, system and equipment |
CN115953420A (en) * | 2023-03-15 | 2023-04-11 | 深圳市联影高端医疗装备创新研究院 | Deep learning network model and medical image segmentation method, device and system |
CN115953420B (en) * | 2023-03-15 | 2023-08-22 | 深圳市联影高端医疗装备创新研究院 | Deep learning network model and medical image segmentation method, device and system |
CN116386902B (en) * | 2023-04-24 | 2023-12-19 | 北京透彻未来科技有限公司 | Artificial intelligent auxiliary pathological diagnosis system for colorectal cancer based on deep learning |
CN116386902A (en) * | 2023-04-24 | 2023-07-04 | 北京透彻未来科技有限公司 | Artificial intelligent auxiliary pathological diagnosis system for colorectal cancer based on deep learning |
CN118098522A (en) * | 2024-04-28 | 2024-05-28 | 北方健康医疗大数据科技有限公司 | Medical data labeling method, system and medium based on large model |
CN119048593A (en) * | 2024-08-07 | 2024-11-29 | 北京科技大学 | Shoulder cyst positioning method and system based on magnetic resonance imaging image |
CN119014954A (en) * | 2024-09-29 | 2024-11-26 | 北京大学第一医院(北京大学第一临床医学院) | A method, device and program product for prostate puncture biopsy |
CN119477806A (en) * | 2024-10-16 | 2025-02-18 | 华东数字医学工程研究院 | A lesion detection method and system based on image recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111028206A (en) | Prostate cancer automatic detection and classification system based on deep learning | |
Zebari et al. | Improved threshold based and trainable fully automated segmentation for breast cancer boundary and pectoral muscle in mammogram images | |
CN111428709B (en) | Image processing method, device, computer equipment and storage medium | |
CN108464840B (en) | Automatic detection method and system for breast lumps | |
George et al. | Remote computer-aided breast cancer detection and diagnosis system based on cytological images | |
WO2018120942A1 (en) | System and method for automatically detecting lesions in medical image by means of multi-model fusion | |
CN111553892B (en) | Lung nodule segmentation calculation method, device and system based on deep learning | |
CN112102237A (en) | Brain tumor recognition model training method and device based on semi-supervised learning | |
CN112380900A (en) | Deep learning-based cervical fluid-based cell digital image classification method and system | |
Xu et al. | Using transfer learning on whole slide images to predict tumor mutational burden in bladder cancer patients | |
KR102620046B1 (en) | Method and system for breast ultrasonic image diagnosis using weakly-supervised deep learning artificial intelligence | |
Székely et al. | A hybrid system for detecting masses in mammographic images | |
CN113870194B (en) | Breast tumor ultrasonic image processing device with fusion of deep layer characteristics and shallow layer LBP characteristics | |
CN112545562A (en) | Multimodal multiparameter breast cancer screening system, device and computer storage medium | |
CN116524315A (en) | A Method for Recognition and Segmentation of Lung Cancer Pathological Tissue Slices Based on Mask R-CNN | |
Sreelekshmi et al. | SwinCNN: an integrated Swin transformer and CNN for improved breast Cancer grade classification | |
Priyadharshini et al. | Artificial intelligence assisted improved design to predict brain tumor on earlier stages using deep learning principle | |
Nurmaini et al. | An improved semantic segmentation with region proposal network for cardiac defect interpretation | |
CN117974588A (en) | A deep learning model, construction method and system based on multimodal ultrasound images | |
Luo et al. | Automatic quality assessment for 2D fetal sonographic standard plane based on multi-task learning | |
CN119785172A (en) | AI-based automated liver cancer tissue recognition and diagnosis system | |
Wang et al. | Localization and risk stratification of thyroid nodules in ultrasound images through deep learning | |
Hasan et al. | Real-time segmentation and classification of whole-slide images for tumor biomarker scoring | |
CN112686912B (en) | Segmentation of acute stroke lesions based on step-by-step learning and mixed samples | |
Chen et al. | What can machine vision do for lymphatic histopathology image analysis: a comprehensive review |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200417 |