
CN114202524A - Performance evaluation method and system of multi-modal medical image - Google Patents

Performance evaluation method and system of multi-modal medical image

Info

Publication number
CN114202524A
CN114202524A (application CN202111506251.3A)
Authority
CN
China
Prior art keywords
medical image
modal
classification
feature
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111506251.3A
Other languages
Chinese (zh)
Inventor
丛超
王毅
李晓光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chinese Peoples Liberation Army Army Specialized Medical Center
Original Assignee
Chinese Peoples Liberation Army Army Specialized Medical Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chinese Peoples Liberation Army Army Specialized Medical Center
Priority to CN202111506251.3A
Publication of CN114202524A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/0014 Biomedical image inspection using an image reference approach
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30068 Mammography; Breast

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Primary Health Care (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention relates to the technical field of medical image analysis and discloses a method and a system for evaluating the performance of multi-modal medical images. A case sample set of the same disease is acquired, and the multi-modal medical image sequence corresponding to each case sample in the set is obtained; a training set and a test set are selected and assigned case-level classification labels; N single-modal feature vectors are extracted from the training set and combined to obtain 2^N - 1 feature vector combinations (the sum of C(N, k) over k = 1...N); a classification model is trained for each feature vector combination, tested on the test set, and the classification performance of each classification model is evaluated, thereby obtaining the classification performance, on the same disease, of each multi-modal medical image formed by combining different single-modal medical images. The invention solves the technical problem that the prior art lacks an objective reference standard for combining single-modal images, making the combination of single-modal medical images more objective, more accurate, and closer to clinical reality.

Description

Performance evaluation method and system of multi-modal medical image
Technical Field
The invention relates to the technical field of medical image analysis.
Background
In medical imaging, the assisted diagnosis of a given class of diseases usually involves many different types/modalities/sequences of imaging. For example, in the diagnosis of breast diseases, the classification of ductal carcinoma in situ, invasive ductal carcinoma, lobular carcinoma, and non-malignant tumors (fibroadenoma, adenosis, hyperplasia, etc.) draws on multi-modal/multi-sequence image categories such as mammography, breast MRI T2-weighted, T1 plain scan, dynamic contrast enhancement (DCE), DWI, and ADC. Another example is the diagnosis of brain gliomas, which uses a variety of imaging techniques: T1, T2W (especially T2 FLAIR), DWI, perfusion imaging, DSC imaging, arterial spin labeling, etc. The same applies to the assisted diagnosis and postoperative evaluation of prostate cancer, rectal cancer, lung cancer, and other diseases.
In the prior art, assisted diagnosis of a given class of diseases applies deep-learning analysis to images of modalities selected in advance. For example, assisted diagnosis of breast cancer usually takes DCE together with T2W or DWI as its subject, while glioma segmentation is usually dominated by T2 FLAIR, combined with T1 or other modalities for image fusion or feature fusion.
However, the prior art gives little thought to the following questions. 1) Why choose the combination of images from these particular modalities rather than a combination of other modalities? (The answer is often clinical experience and subjective judgment.) 2) When medical conditions are limited and images of some modality cannot be obtained, how much does the result change if that modality is substituted by another? For example, dynamic contrast-enhanced (DCE) scanning requires injecting a contrast agent into the patient; it is more expensive, and some patients are allergic to contrast agents. How can a balanced compromise be reached between diagnostic efficacy and cost? 3) During multi-modal analysis (whether by a radiologist's reading or computer-aided diagnosis), if the current modality combination cannot accurately assess the disease classification (benign/malignant), which modality/sequence images should be added to improve diagnostic accuracy most effectively?
Disclosure of Invention
Aiming at the above technical defects, the invention provides a method for evaluating the performance of multi-modal medical images, which solves the technical problem that the prior art lacks an objective reference standard when combining single-modal images.
In order to solve the technical problem, the invention provides a method for evaluating the performance of a multi-modal medical image, which comprises the following steps:
acquiring a case sample set of the same disease, and acquiring a multi-modal medical image sequence corresponding to each case sample in the case sample set, wherein the multi-modal medical image sequence comprises N single-modal medical image sequences;
selecting the multi-modal medical image sequences corresponding to part of the case samples as a training set and those corresponding to the remaining case samples as a test set, and assigning case-level classification labels to both the training set and the test set;
extracting N single-modal feature vectors from the multi-modal medical image sequences corresponding to the training set;
combining the N single-modal feature vectors to obtain 2^N - 1 feature vector combinations;
training a classification model for each feature vector combination, so that each classification model acquires the capability of identifying the same disease from its corresponding feature vector combination;
and testing with the test set, evaluating the classification performance of each classification model as the classification performance of its corresponding feature vector combination on the same disease, thereby obtaining the classification performance, for the same disease, of each multi-modal medical image formed by combining different single-modal medical images. A sketch of this overall evaluation loop is given below.
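The following is a minimal sketch of that loop, not the patented implementation itself: the three callables (extract_features, train_classifier, evaluate) are hypothetical placeholders standing in for the feature-extraction, model-training, and testing steps detailed below, and evaluate is assumed to return a scalar score such as accuracy.

```python
from itertools import combinations

def evaluate_modality_combinations(train_seqs, test_seqs, train_labels,
                                   test_labels, extract_features,
                                   train_classifier, evaluate):
    """Train one classifier per non-empty modality combination and score it.

    train_seqs / test_seqs map modality name -> per-case image sequences;
    labels are case-level classification labels. The three callables stand
    in for the steps described in the disclosure.
    """
    modalities = list(train_seqs.keys())        # the N single-modal sequences
    results = {}
    for k in range(1, len(modalities) + 1):     # all 2^N - 1 combinations
        for combo in combinations(modalities, k):
            feats_train = extract_features(train_seqs, combo)
            feats_test = extract_features(test_seqs, combo)
            model = train_classifier(feats_train, train_labels)
            results[combo] = evaluate(model, feats_test, test_labels)
    # rank modality combinations by their classification performance
    return dict(sorted(results.items(), key=lambda kv: kv[1], reverse=True))
```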
Further, a trained feature extractor is adopted to automatically extract the single-modal feature vectors; the training of the feature extractor comprises the following steps (a sketch of the network construction follows this list):
positively sampling each single-modal medical image sequence of the multi-modal medical image sequences in the training set: examining the single-modal medical image sequence and selecting the images containing a lesion as sampling images;
setting a corresponding disease label according to the lesion on each sampling image, and building the labeled sampling images into a training data set;
pre-training a CNN with ImageNet1000, retaining its network weights, and replacing the final softmax activation layer with a softmax activation layer matching the number of categories of the current disease labels, thereby constructing the feature extractor;
training the feature extractor with the training data set.
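For illustration, the head replacement can be written in a few lines with PyTorch/torchvision. This is a hedged sketch assuming a ResNet50 backbone (one of the backbones named in the description below) and torchvision's weights API (version 0.13 or later); the softmax itself is applied inside the cross-entropy loss during training, as is idiomatic in PyTorch.

```python
import torch.nn as nn
from torchvision import models

def build_feature_extractor(num_classes: int) -> nn.Module:
    """ImageNet-pretrained CNN whose final classification layer is replaced
    to match the category count of the current disease labels."""
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    # keep the pretrained weights; swap only the classification head
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # softmax via the loss
    return net
```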
Further, the extraction of a single-modal feature vector comprises the following steps (a sketch of both steps follows this list):
unifying the slice count of each single-modal medical image sequence to S through sampling or interpolation, keeping the scanning positions of corresponding slices substantially consistent;
and inputting the single-modal medical image sequence into the trained feature extractor and intercepting the output of the last fully-connected layer as the feature vector.
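A minimal sketch of these two steps, assuming the extractor above (any torchvision ResNet with an `.fc` head) and a sequence stored as a PyTorch tensor of shape (slices, channels, height, width); the "output of the last fully-connected layer" is read here as the representation feeding the classification head, exposed by temporarily replacing the head with an identity mapping.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def unify_slices(seq: torch.Tensor, S: int) -> torch.Tensor:
    """Resample a (slices, C, H, W) sequence to exactly S slices by linear
    interpolation along the slice axis."""
    x = seq.permute(1, 0, 2, 3).unsqueeze(0)        # (1, C, slices, H, W)
    x = F.interpolate(x, size=(S, seq.shape[2], seq.shape[3]),
                      mode="trilinear", align_corners=False)
    return x.squeeze(0).permute(1, 0, 2, 3)         # (S, C, H, W)

@torch.no_grad()
def extract_sequence_features(extractor: nn.Module, seq: torch.Tensor,
                              S: int) -> torch.Tensor:
    """Return an (S, D) matrix of per-slice feature vectors."""
    extractor.eval()                                  # inference mode for BN/dropout
    head, extractor.fc = extractor.fc, nn.Identity()  # temporarily bypass the head
    feats = extractor(unify_slices(seq, S))           # (S, D)
    extractor.fc = head                               # restore the classifier head
    return feats
```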
The invention also provides a performance evaluation system for multi-modal medical images, comprising a multi-modal medical image sequence input module, N parallel single-modal feature extraction channels, a feature combination module, 2^N - 1 classification models, and a classification performance evaluation module;
the multi-modal medical image sequence input module is used for inputting the N single-modal medical image sequences of a multi-modal medical image sequence into the N parallel single-modal feature extraction channels, respectively;
each single-modal feature extraction channel comprises, in sequence, an image preprocessing module and a feature extractor, the feature extractor being a CNN trained on the training set and capable of extracting single-modal features;
the feature combination module is used for combining the single-modal feature vectors into 2^N - 1 feature vector combinations and inputting each feature vector combination into the corresponding one of the 2^N - 1 classification models;
each classification model is used for identifying a certain disease type from its corresponding feature vector combination;
the classification performance evaluation module is used for evaluating the classification performance of each classification model from its classification results on the test set, taking it as the classification performance of the corresponding feature vector combination on the same disease, thereby obtaining the classification performance, for the same disease, of each multi-modal medical image formed by combining different single-modal medical images.
Compared with the prior art, the invention has the beneficial effects that:
1. First, the invention does not focus on auxiliary classification prediction of disease or on detection of lesion location, but on retrospective analysis; that is, it answers the multi-modal analysis question posed above: which single-modal medical images should be selected and combined, in a way that is more objective, more accurate, and closer to clinical reality.
2. The method is end-to-end: during model training and multi-modal evaluation, only images and classification labels need to be input; features are extracted by the feature extractor without other manual intervention. Image-level classification labels are needed during training, and only case-level classification labels during evaluation. This saves a significant amount of the time required for multi-modal assessment.
3. The evaluation process is longitudinal. Longitudinal assessment means that the clinical diagnostic efficacy of a multi-modal combination is compared only with that of other combinations, not with the accuracy of other algorithms or of physicians' diagnoses. The workflow is identical for every evaluation, which ensures the objectivity of the evaluation process and improves the reliability of the results.
Drawings
FIG. 1 is a flowchart of a method for evaluating the performance of a multi-modality medical image according to the present embodiment;
FIG. 2 is a diagram showing a structure of a Transformer model.
Detailed Description
The present invention will be described in detail with reference to the accompanying drawings.
The invention provides a performance evaluation method for multi-modal medical images, as shown in FIG. 1 (right-angled boxes in FIG. 1 are method steps, rounded boxes are data), comprising the following steps:
1) Constructing samples
Acquiring a case sample set of the same disease, and acquiring a multi-modal medical image sequence corresponding to each case sample in the case sample set, wherein the multi-modal medical image sequence comprises N single-modal medical image sequences;
Image preprocessing is performed on the input multi-modal/multi-parameter sequence images. The preprocessing step generally includes cropping, enhancement, denoising, and similar operations; the aim is to make image features, especially lesion features, more clearly distinguishable from the background. A hedged sketch of such a chain is shown below.
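This sketch mirrors the cropping, denoising, and Z-score regularization used in the experiment later in this description, assuming each slice is a 2D NumPy array; the operators and the crop size are illustrative choices, not prescribed by the method.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_slice(img: np.ndarray, crop: int = 224) -> np.ndarray:
    """Center-crop, denoise, and Z-score-normalize a single slice."""
    h, w = img.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    img = img[top:top + crop, left:left + crop]       # center crop
    img = median_filter(img, size=3)                  # simple denoising
    return (img - img.mean()) / (img.std() + 1e-8)    # Z-score regularization
```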
The multi-modal medical image sequences corresponding to part of the case samples are selected as a training set and those corresponding to the remaining case samples as a test set, and case-level classification labels are assigned to both; case-level labels are generally the results of pathological biopsy, which counts as the gold standard.
2) Training feature extractor
2D CNN training. The purpose of training is to generate a feature extractor for the sequence images, used to extract image feature vectors. This step is divided into slice sampling, label setting, and classifier training.
2.1 Slice sampling. Since multi-modal medical images (MRI/CT) are usually 3D image sequences, while lesions (benign/malignant tumors, etc.) do not usually appear in every image of the sequence, the 3D sequence must be examined manually and the images containing the lesion selected as positive samples.
2.2 Label setting. Different labels are set for different disease types (image-level classification labels, related only to the lesion appearing on the image). For example, for multi-modal analysis of breast mpMRI, a 2-class problem can be defined as non-malignant/malignant, a 3-class problem as normal breast/benign (including adenosis, fibroadenoma, hyperplasia, and other conditions)/malignant, and further classes can be defined for different tumor subtypes (ductal carcinoma in situ, invasive ductal carcinoma, lobular carcinoma, etc.). Class balance needs to be considered when setting the labels.
2.3 Classifier training. A CNN is trained on the sampled images and labels. The CNN model can be chosen from the common classification models; in our experiments VGG16/19, ResNet50/101/152, Inception V3, Inception-ResNet v2, DenseNet169, and DenseNet201 were selected. They differ slightly in classification precision and final evaluation precision, but in general this does not affect the final conclusion. The training process consists of pre-training the CNN with ImageNet1000 (a data set built by Fei-Fei Li's group at Stanford University to address overfitting and generalization problems in machine learning, available at https://image-net.org), retaining its network weights while replacing the final softmax activation layer with one matching the current number of classes (thereby constructing the feature extractor), and then training on our sampled training data set.
3) Extracting single-modal feature vectors
N single-modal feature vectors are extracted from the multi-modal medical image sequences of the training set; the single-modal feature vectors are extracted automatically by the trained feature extractor. First, the images are serialized; then each modality's images are sampled/interpolated to the same slice count (taken as S here), where the scanning positions of corresponding slices across modalities are generally required to be substantially consistent, so that the feature vector combination at each position better represents the lesion category. Finally, the trained CNN extracts features from each image: the image is input into the CNN model and the output of the last fully-connected layer is intercepted as the feature vector (in our experiments we compared the fully-connected layer against a convolutional layer, and the fully-connected layer performed better).
4) Combining feature vectors
The multi-modal feature vectors are then combined. For feature vectors of N modalities, we consider that the order of arrangement does not affect the final evaluation result, so only combinations (not permutations) of the feature vectors are formed. By the combination formula, the number of combinations of feature vectors of N modalities is: C(N, 1) + C(N, 2) + ... + C(N, N) = 2^N - 1.
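For example, enumerating the combinations with Python's itertools confirms this count; the modality names here are illustrative, borrowed from the breast-MRI example below.

```python
from itertools import combinations
from math import comb

modalities = ["T1NFS", "T2W", "DWI", "ADC", "DCE"]        # N = 5 for illustration
combos = [c for k in range(1, len(modalities) + 1)
          for c in combinations(modalities, k)]
# the sum of C(N, k) for k = 1..N equals 2^N - 1
assert len(combos) == sum(comb(len(modalities), k)
                          for k in range(1, len(modalities) + 1))
assert len(combos) == 2 ** len(modalities) - 1            # 31 combinations here
```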
5) Training classification models
A classification model is trained for each feature vector combination, so that each classification model acquires the capability of identifying the same disease from its corresponding feature vector combination.
An N-channel parallel Transformer model is adopted; its details are shown in FIG. 2. For a feature vector combination with N input modalities, a Transformer model of dimension (S × N) is designed for classification and evaluation, where S is the number of slices and N is the number of channels (in our experiments we compared a 3D multi-channel CNN, a multi-channel LSTM, and the Transformer model, and the Transformer's final classification effect was the best). The final per-channel/per-slice Transformer outputs pass through pooling and fully-connected layers and are classified by softmax. The Transformer model can be trained with case-level classification labels. One Transformer model must be trained for each combination, so 2^N - 1 Transformer models need to be trained in this step. Finally, these models are used to evaluate the test set and compare the efficacy of different modality combinations.
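A minimal sketch of such an (S × N)-token Transformer classifier in PyTorch, assuming D-dimensional per-slice feature vectors; the layer count, head count, and mean pooling are illustrative assumptions rather than the patent's exact configuration.

```python
import torch
import torch.nn as nn

class MultiModalTransformer(nn.Module):
    """Classify a case from its S x N per-slice feature vectors.

    Input shape: (batch, N, S, D) -- N modality channels, S slices per
    channel, D-dimensional feature vectors. The S*N vectors are flattened
    into one token sequence, encoded, pooled, and classified through a
    fully-connected layer (softmax is applied by the cross-entropy loss)."""

    def __init__(self, d_model: int, num_classes: int,
                 nhead: int = 8, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, s, d = x.shape
        tokens = x.reshape(b, n * s, d)            # one token per slice per channel
        pooled = self.encoder(tokens).mean(dim=1)  # pool over all tokens
        return self.head(pooled)                   # class logits, case-level labels
```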
6) Evaluation of Performance
Testing is performed with the test set, and the classification performance of each classification model is evaluated as the classification performance of its corresponding feature vector combination on the same disease, thereby obtaining the classification performance, for the same disease, of each multi-modal medical image formed by combining different single-modal medical images. The classification performance comprises accuracy and/or recall.
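Accuracy and recall for each combination can be computed with scikit-learn, for example; a minimal sketch assuming integer-coded case-level labels and predictions:

```python
from sklearn.metrics import accuracy_score, recall_score

def classification_performance(y_true, y_pred) -> dict:
    """Accuracy and macro-averaged recall of one feature-vector combination."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred, average="macro"),
    }
```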
The following takes MRI multi-modal medical images of breast cancer as an example.
Background and purpose of study
MRI DCE (dynamic contrast-enhanced) imaging is currently the most effective (highest-sensitivity) modality for diagnosing breast cancer. But when DCE imaging cannot be performed, e.g. because of its high price or a patient's intolerance of contrast agents, can a similar effect be achieved by deep-learning classification plus other modalities (T1NFS, T2W, DWI, ...)?
Design of experimental protocol
Image data: 595 cases of breast MRI, comprising 8 sequences in total: T1NFS, T2W, trace-weighted b50/b850, ADC, amplified b-value images (BVAL), and DCE early/late phases (registered to a common reference);
Labels: taken from the pathology report + imaging report; location (left/right); category: Normal/Benign/Malignant;
Image preprocessing: DWI denoising, center/left-right cropping, Z-score regularization;
2D image classification models: ResNet50, ResNet101, Inception V3, Inception-ResNet v2, DenseNet169, DenseNet201;
3D multi-modal classification models: direct pooling model, LSTM + GAP, Transformer;
Training strategy: 277 cases for training with 3:1 cross-validation; the other 318 cases as the test set;
results of the experiment
Table 1 results of multi-modality breast MRI assessment.
(Table 1 is provided as images in the original publication; its contents are not reproducible here.)
In the table, DCE is the dynamic contrast-enhanced sequence (Dynamic Contrast Enhancement) and DWI is diffusion-weighted imaging (Diffusion Weighted Imaging).
Clinical significance
1. The sequence combination with the highest accuracy, with and without DCE, is identified;
2. When the lesion cannot be judged from the current combination, it shows which sequences to add to improve the precision (from 87% to 92%).

Claims (8)

1. A method for evaluating the performance of a multi-modal medical image, comprising the steps of:
acquiring a case sample set of the same disease, and acquiring a multi-modal medical image sequence corresponding to each case sample in the case sample set, wherein the multi-modal medical image sequence comprises N single-modal medical image sequences;
selecting the multi-modal medical image sequences corresponding to part of the case samples as a training set and those corresponding to the remaining case samples as a test set, and assigning case-level classification labels to both the training set and the test set;
extracting N single-modal feature vectors from the multi-modal medical image sequences corresponding to the training set;
combining the N single-modal feature vectors to obtain 2^N - 1 feature vector combinations;
training a classification model for each feature vector combination, so that each classification model acquires the capability of identifying the same disease from its corresponding feature vector combination;
and testing with the test set, evaluating the classification performance of each classification model as the classification performance of its corresponding feature vector combination on the same disease, thereby obtaining the classification performance, for the same disease, of each multi-modal medical image formed by combining different single-modal medical images.
2. The method of claim 1, wherein the classification performance comprises accuracy and/or recall.
3. The method for evaluating the performance of multi-modal medical images according to claim 1, wherein a trained feature extractor is adopted to automatically extract the single-modal feature vectors; the training of the feature extractor comprises:
positively sampling each single-modal medical image sequence of the multi-modal medical image sequences in the training set: examining the single-modal medical image sequence and selecting the images containing a lesion as sampling images;
setting a corresponding disease label according to the lesion on each sampling image, and building the labeled sampling images into a training data set;
pre-training a CNN with ImageNet1000, retaining its network weights, and replacing the final softmax activation layer with a softmax activation layer matching the number of categories of the current disease labels, thereby constructing the feature extractor;
training the feature extractor with the training data set.
4. The method for evaluating the performance of multi-modal medical images according to claim 3, wherein the extraction of a single-modal feature vector comprises the following steps:
unifying the slice count of each single-modal medical image sequence to S through sampling or interpolation, keeping the scanning positions of corresponding slices substantially consistent;
and inputting the single-modal medical image sequence into the trained feature extractor and intercepting the output of the last fully-connected layer as the feature vector.
5. The method of claim 1, wherein the single-modal medical image sequence is preprocessed before feature extraction, the preprocessing comprising image cropping, enhancement, and denoising.
6. A system for evaluating the performance of multi-modal medical images, comprising a multi-modal medical image sequence input module, N parallel single-modal feature extraction channels, a feature combination module, 2^N - 1 classification models, and a classification performance evaluation module;
the multi-modal medical image sequence input module is used for inputting the N single-modal medical image sequences of a multi-modal medical image sequence into the N parallel single-modal feature extraction channels, respectively;
each single-modal feature extraction channel comprises, in sequence, an image preprocessing module and a feature extractor, the feature extractor being a CNN trained on the training set and capable of extracting single-modal features;
the feature combination module is used for combining the single-modal feature vectors into 2^N - 1 feature vector combinations and inputting each feature vector combination into the corresponding one of the 2^N - 1 classification models;
each classification model is used for identifying a certain disease type from its corresponding feature vector combination;
the classification performance evaluation module is used for evaluating the classification performance of each classification model from its classification results on the test set, taking it as the classification performance of the corresponding feature vector combination on the same disease, thereby obtaining the classification performance, for the same disease, of each multi-modal medical image formed by combining different single-modal medical images.
7. The system for evaluating the performance of multi-modal medical images according to claim 6, wherein a Transformer model or an LSTM model is used as the classification model.
8. The system for evaluating the performance of multi-modal medical images according to claim 6, wherein a VGG16/19, ResNet50/101/152, Inception V3, Inception-ResNet v2, DenseNet169, or DenseNet201 model is employed as the feature extractor.
CN202111506251.3A 2021-12-10 2021-12-10 Performance evaluation method and system of multi-modal medical image Pending CN114202524A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111506251.3A CN114202524A (en) 2021-12-10 2021-12-10 Performance evaluation method and system of multi-modal medical image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111506251.3A CN114202524A (en) 2021-12-10 2021-12-10 Performance evaluation method and system of multi-modal medical image

Publications (1)

Publication Number Publication Date
CN114202524A (en) 2022-03-18

Family

ID=80652390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111506251.3A Pending CN114202524A (en) 2021-12-10 2021-12-10 Performance evaluation method and system of multi-modal medical image

Country Status (1)

Country Link
CN (1) CN114202524A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820491A (en) * 2022-04-18 2022-07-29 汕头大学 Multi-modal stroke lesion segmentation method and system based on small sample learning
CN115295134A (en) * 2022-09-30 2022-11-04 北方健康医疗大数据科技有限公司 Medical model evaluation method and device and electronic equipment
CN116524248A (en) * 2023-04-17 2023-08-01 首都医科大学附属北京友谊医院 Medical data processing device, method and classification model training device
CN116630680A (en) * 2023-04-06 2023-08-22 南方医科大学南方医院 Dual-mode image classification method and system combining X-ray photography and ultrasound
CN116703896A (en) * 2023-08-02 2023-09-05 神州医疗科技股份有限公司 Multi-mode-based prostate cancer and hyperplasia prediction system and construction method

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231702A (en) * 2008-01-25 2008-07-30 华中科技大学 A Classifier Ensemble Method
CN101517602A (en) * 2006-09-22 2009-08-26 皇家飞利浦电子股份有限公司 Methods for feature selection using classifier ensemble based genetic algorithms
CN105745659A (en) * 2013-09-16 2016-07-06 佰欧迪塞克斯公司 A Classifier Generation Method Using Regularization to Combining Multiple Tiny Classifiers and Its Applications
CN107506797A (en) * 2017-08-25 2017-12-22 电子科技大学 One kind is based on deep neural network and multi-modal image alzheimer disease sorting technique
CN107635189A (en) * 2017-09-15 2018-01-26 中国联合网络通信集团有限公司 A beam selection method and device
CN109544517A (en) * 2018-11-06 2019-03-29 中山大学附属第医院 Multi-modal ultrasound omics analysis method and system based on deep learning
CN109558896A (en) * 2018-11-06 2019-04-02 中山大学附属第医院 Disease intelligent analysis method and system based on ultrasound omics and deep learning
CN111582330A (en) * 2020-04-22 2020-08-25 北方民族大学 Integrated ResNet-NRC method for dividing sample space based on lung tumor image
CN111798440A (en) * 2020-07-11 2020-10-20 大连东软教育科技集团有限公司 Medical image artifact automatic identification method, system and storage medium
CN112200016A (en) * 2020-09-17 2021-01-08 东北林业大学 Electroencephalogram signal emotion recognition based on ensemble learning method AdaBoost
CN112465058A (en) * 2020-12-07 2021-03-09 中国计量大学 Multi-modal medical image classification method under improved GoogLeNet neural network
CN112819076A (en) * 2021-02-03 2021-05-18 中南大学 Deep migration learning-based medical image classification model training method and device
CN112884754A (en) * 2021-03-11 2021-06-01 广东工业大学 Multi-modal Alzheimer's disease medical image recognition and classification method and system
CN113077433A (en) * 2021-03-30 2021-07-06 山东英信计算机技术有限公司 Deep learning-based tumor target area cloud detection device, system, method and medium
CN113558634A (en) * 2021-07-26 2021-10-29 西南大学 A data monitoring method, device, electronic device and storage medium

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101517602A (en) * 2006-09-22 2009-08-26 皇家飞利浦电子股份有限公司 Methods for feature selection using classifier ensemble based genetic algorithms
CN101231702A (en) * 2008-01-25 2008-07-30 华中科技大学 A Classifier Ensemble Method
CN105745659A (en) * 2013-09-16 2016-07-06 佰欧迪塞克斯公司 A Classifier Generation Method Using Regularization to Combining Multiple Tiny Classifiers and Its Applications
CN107506797A (en) * 2017-08-25 2017-12-22 电子科技大学 One kind is based on deep neural network and multi-modal image alzheimer disease sorting technique
CN107635189A (en) * 2017-09-15 2018-01-26 中国联合网络通信集团有限公司 A beam selection method and device
CN109558896A (en) * 2018-11-06 2019-04-02 中山大学附属第医院 Disease intelligent analysis method and system based on ultrasound omics and deep learning
CN109544517A (en) * 2018-11-06 2019-03-29 中山大学附属第医院 Multi-modal ultrasound omics analysis method and system based on deep learning
CN111582330A (en) * 2020-04-22 2020-08-25 北方民族大学 Integrated ResNet-NRC method for dividing sample space based on lung tumor image
CN111798440A (en) * 2020-07-11 2020-10-20 大连东软教育科技集团有限公司 Medical image artifact automatic identification method, system and storage medium
CN112200016A (en) * 2020-09-17 2021-01-08 东北林业大学 Electroencephalogram signal emotion recognition based on ensemble learning method AdaBoost
CN112465058A (en) * 2020-12-07 2021-03-09 中国计量大学 Multi-modal medical image classification method under improved GoogLeNet neural network
CN112819076A (en) * 2021-02-03 2021-05-18 中南大学 Deep migration learning-based medical image classification model training method and device
CN112884754A (en) * 2021-03-11 2021-06-01 广东工业大学 Multi-modal Alzheimer's disease medical image recognition and classification method and system
CN113077433A (en) * 2021-03-30 2021-07-06 山东英信计算机技术有限公司 Deep learning-based tumor target area cloud detection device, system, method and medium
CN113558634A (en) * 2021-07-26 2021-10-29 西南大学 A data monitoring method, device, electronic device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YIN DAI et al.: "TransMed: Transformers Advance Multi-Modal Medical Image Classification", Diagnostics *
CONG CHAO et al.: "Automatic localization and classification prediction of vascular stenosis in coronary angiography images based on deep neural networks", Chinese Journal of Biomedical Engineering *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820491A (en) * 2022-04-18 2022-07-29 汕头大学 Multi-modal stroke lesion segmentation method and system based on small sample learning
CN115295134A (en) * 2022-09-30 2022-11-04 北方健康医疗大数据科技有限公司 Medical model evaluation method and device and electronic equipment
CN115295134B (en) * 2022-09-30 2023-03-24 北方健康医疗大数据科技有限公司 Medical model evaluation method and device and electronic equipment
CN116630680A (en) * 2023-04-06 2023-08-22 南方医科大学南方医院 Dual-mode image classification method and system combining X-ray photography and ultrasound
CN116630680B (en) * 2023-04-06 2024-02-06 南方医科大学南方医院 Dual-mode image classification method and system combining X-ray photography and ultrasound
CN116524248A (en) * 2023-04-17 2023-08-01 首都医科大学附属北京友谊医院 Medical data processing device, method and classification model training device
CN116524248B (en) * 2023-04-17 2024-02-13 首都医科大学附属北京友谊医院 Medical data processing device, method and classification model training device
CN116703896A (en) * 2023-08-02 2023-09-05 神州医疗科技股份有限公司 Multi-mode-based prostate cancer and hyperplasia prediction system and construction method
CN116703896B (en) * 2023-08-02 2023-10-24 神州医疗科技股份有限公司 Multi-mode-based prostate cancer and hyperplasia prediction system and construction method

Similar Documents

Publication Publication Date Title
Sun et al. Multiparametric MRI and radiomics in prostate cancer: a review
Moon et al. Computer‐aided diagnosis of breast ultrasound images using ensemble learning from convolutional neural networks
Han et al. Semi-supervised segmentation of lesion from breast ultrasound images with attentional generative adversarial network
US10489908B2 (en) Deep convolutional encoder-decoder for prostate cancer detection and classification
CN114202524A (en) Performance evaluation method and system of multi-modal medical image
Yang et al. Co-trained convolutional neural networks for automated detection of prostate cancer in multi-parametric MRI
Lladó et al. Segmentation of multiple sclerosis lesions in brain MRI: a review of automated approaches
EP3432784B1 (en) Deep-learning-based cancer classification using a hierarchical classification framework
Somasundaram et al. Fully automatic brain extraction algorithm for axial T2-weighted magnetic resonance images
JP2023540910A (en) Connected Machine Learning Model with Collaborative Training for Lesion Detection
Jun et al. Deep-learned 3D black-blood imaging using automatic labelling technique and 3D convolutional neural networks for detecting metastatic brain tumors
CN111047589A (en) An attention-enhanced brain tumor-assisted intelligent detection and recognition method
CN111598864B (en) Liver cell cancer differentiation evaluation method based on multi-modal image contribution fusion
Heydarheydari et al. Auto-segmentation of head and neck tumors in positron emission tomography images using non-local means and morphological frameworks
CN104414636A (en) Magnetic resonance image based cerebral micro-bleeding computer auxiliary detection system
Zhou Modality-level cross-connection and attentional feature fusion based deep neural network for multi-modal brain tumor segmentation
CN110728239B (en) An automatic recognition system for gastric cancer enhanced CT images using deep learning
US20230162353A1 (en) Multistream fusion encoder for prostate lesion segmentation and classification
CN112950644A (en) Deep learning-based neonatal brain image segmentation method and model construction method
Nguyen-Tat et al. Enhancing brain tumor segmentation in MRI images: A hybrid approach using UNet, attention mechanisms, and transformers
Song et al. Prostate lesion segmentation based on a 3D end-to-end convolution neural network with deep multi-scale attention
US20090069665A1 (en) Automatic Lesion Correlation in Multiple MR Modalities
CN119478097A (en) A medical image synthesis method based on spatial and frequency domain attention mechanism
Delmoral et al. Segmentation of pathological liver tissue with dilated fully convolutional networks: A preliminary study
Sammouda et al. Intelligent Computer‐Aided Prostate Cancer Diagnosis Systems: State‐of‐the‐Art and Future Directions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20220318)