CN117352164A - Multimodal tumor detection and diagnosis platform based on artificial intelligence and its processing method - Google Patents
- Publication number
- CN117352164A (application CN202311463809.3A)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses an artificial-intelligence-based multimodal tumor detection and diagnosis platform and a processing method thereof, in the field of information processing technology. A fusion model is constructed that takes the symptom feature vector, the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the organ functional information image feature vector as inputs and a decision as output; it fuses the acquired symptom feature vector and image feature vectors and outputs a tumor severity grade decision that determines the tumor severity grade of the patient under examination. The AI-based neural network models comprehensively analyze the patient's basic information, chief-complaint information, and medical imaging information, so that patients in whom the target tumor has occurred can be graded by tumor severity. This reduces the probability of errors in manual image-based judgment and improves the accuracy of tumor diagnosis.
Description
Technical Field
The present invention relates to the field of information processing technology, and in particular to an artificial-intelligence-based multimodal tumor detection and diagnosis platform and a processing method thereof.
Background
Multimodal image-fusion examination (PET/CT + MRI) is a tumor detection technology that has developed rapidly in recent years. PET captures metabolic information images at the molecular level, CT captures anatomical information images of tissue structure, and MRI captures functional information images of tissues and organs. Integrating these three high-resolution images, each with a different imaging basis, provides rich information for disease diagnosis and enables early, deep, and complete whole-body tumor screening.
At present, the severity of a tumor must still be judged by physicians based on their own experience and ability to read images. Although multimodal image-fusion examination provides a good diagnostic tool for tumor detection, image-recognition errors during severity assessment can easily lead to an incorrect judgment of the patient's tumor severity, affecting the patient's subsequent diagnosis and treatment. To address this, we propose an artificial-intelligence-based multimodal tumor detection and diagnosis platform and its processing method.
Summary of the Invention
The main purpose of the present invention is to provide an artificial-intelligence-based multimodal tumor detection and diagnosis platform and a processing method thereof, which can effectively solve the problems described in the background section.
To achieve the above purpose, the present invention adopts the following technical solution.
An artificial-intelligence-based multimodal tumor detection and diagnosis platform, comprising:
a symptom information collection module, which collects the basic-information parameters and chief-complaint parameters of the patient under examination from the electronic medical record (EMR) database of the medical system;
a symptom feature vector extraction module, which builds a first neural network model that takes the basic-information and chief-complaint parameters of diagnosed patients in the EMR database as input and the medically diagnosed tumor severity grade as output; the model is trained, and the number of hidden layers is adjusted according to the error between the training results and the actual diagnostic results until the accuracy is no lower than a first expected value; the constructed first neural network model then extracts the symptom feature vector of the patient under examination related to the target tumor;
an imaging acquisition module, which acquires high-resolution images from different imaging modalities, including the subject's molecular-level metabolic information images, tissue-structure anatomical information images, and organ functional images;
an image feature vector extraction module, which builds a medical image library from the acquired high-resolution multimodal images, pre-trains on the library's image data to obtain a second neural network model, trains it, and adjusts the number of hidden layers according to the error between the training results and the actual diagnostic results until the accuracy is no lower than a second expected value; the constructed second neural network model then extracts the image feature vectors of the patient under examination related to the target tumor, including the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the organ functional information image feature vector;
a feature fusion module, which constructs a fusion model taking the symptom feature vector, the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the organ functional information image feature vector as inputs and a decision as output; the fusion model fuses the acquired symptom and image feature vectors and outputs the tumor severity grade decision;
a central detection module, which builds a tumor severity assessment model and determines the tumor severity grade of the patient under examination according to the obtained tumor severity grade decision.
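The feature-level fusion performed by the feature fusion and central detection modules can be sketched as concatenating the four feature vectors and feeding them through a small decision head. This is a minimal illustration only: the vector dimensions, hidden size, random weights, and softmax head are assumptions, not values specified in the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not specified in the patent):
# symptom vector plus metabolic / anatomical / functional image vectors.
DIMS = {"symptom": 16, "metabolic": 32, "anatomical": 32, "functional": 32}
HIDDEN, GRADES = 64, 4  # four severity grades I-IV

# Randomly initialized weights stand in for a trained fusion model.
W1 = rng.normal(0, 0.1, (sum(DIMS.values()), HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, GRADES))

def fuse_and_grade(features: dict) -> int:
    """Concatenate the four feature vectors and output a severity grade 1-4."""
    x = np.concatenate([features[k] for k in DIMS])  # feature-level fusion
    h = np.tanh(x @ W1)                              # hidden layer
    logits = h @ W2
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                             # softmax over the grades
    return int(np.argmax(probs)) + 1                 # grade in {1, 2, 3, 4}

feats = {k: rng.normal(size=d) for k, d in DIMS.items()}
grade = fuse_and_grade(feats)
assert grade in (1, 2, 3, 4)
```

In a real system the weights would come from training the fusion model described above rather than random initialization; the sketch only shows how the four modality vectors are combined into a single grade decision.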
The detection procedure of the platform comprises the following steps.
Step 1: the symptom information collection module collects the basic-information and chief-complaint parameters of the patient under examination from the EMR database of the medical system.
Step 2: symptom features related to the target tumor are screened. The symptom feature vector extraction module builds the first neural network model (inputs: basic-information and chief-complaint parameters of diagnosed patients in the EMR database; output: medically diagnosed tumor severity grade), trains it, adjusts the number of hidden layers according to the error between the training results and the actual diagnostic results until the accuracy is no lower than the first expected value, and uses the constructed model to extract the symptom feature vector of the patient under examination related to the target tumor.
Step 3: the imaging acquisition module acquires high-resolution images from the different modalities, including the subject's molecular-level metabolic information images, tissue-structure anatomical information images, and organ functional images.
Step 4: image features related to the target tumor are screened. The image feature vector extraction module builds a medical image library from the acquired multimodal high-resolution images, pre-trains on it to obtain the second neural network model, trains that model, adjusts the number of hidden layers according to the error between the training results and the actual diagnostic results until the accuracy is no lower than the second expected value, and uses the constructed model to extract the image feature vectors related to the target tumor, including the molecular-level metabolic, tissue-structure anatomical, and organ functional information image feature vectors.
Step 5: the feature fusion module constructs the fusion model (inputs: the symptom feature vector and the three image feature vectors; output: a decision), fuses the acquired symptom and image feature vectors, and outputs the tumor severity grade decision.
Step 6: the central detection module builds the tumor severity assessment model and, according to the obtained decision, assigns the patient's tumor severity to one of grades I, II, III, or IV.
Further, the diagnosis platform also includes a data storage module connected to the symptom information collection module and the imaging acquisition module, which stores the patient's basic-information and chief-complaint parameters and the subject's molecular-level metabolic information images, tissue-structure anatomical information images, and organ functional images.
Further, the diagnosis platform also includes a processor and a computer program stored in the data storage module and executable on the processor.
The present invention has the following beneficial effects.
(1) Compared with the prior art, the technical solution collects the basic-information and chief-complaint parameters of the patient under examination from the EMR database of the medical system; builds the first neural network model (inputs: basic-information and chief-complaint parameters of diagnosed patients; output: medically diagnosed tumor severity grade) to extract the symptom feature vector related to the target tumor; builds a medical image library from the acquired multimodal high-resolution images and pre-trains on it to obtain the second neural network model, which extracts the image feature vectors related to the target tumor; and fuses the symptom feature vector with the molecular-level metabolic, tissue-structure anatomical, and organ functional information image feature vectors in a fusion model whose output is the tumor severity grade decision. The AI-based neural network models comprehensively analyze the patient's basic information, chief-complaint information, and medical imaging information, so that patients in whom the target tumor has occurred can be graded by severity, reducing the probability of manual image-reading errors and improving the accuracy of tumor diagnosis.
Brief Description of the Drawings
Figure 1 is a structural block diagram of the artificial-intelligence-based multimodal tumor detection and diagnosis platform of the present invention;
Figure 2 is a detection flowchart of the artificial-intelligence-based multimodal tumor detection and diagnosis platform of the present invention.
Detailed Description
The present invention is further described below with reference to specific embodiments. The drawings are schematic illustrations only, not depictions of actual products, and should not be understood as limiting the invention; to better illustrate the embodiments, some components in the drawings may be omitted, enlarged, or reduced and do not represent the dimensions of an actual product.
Embodiment 1
Figure 1 shows the overall structure of the artificial-intelligence-based multimodal tumor detection and diagnosis platform, and Figure 2 shows its detection flow.
The detection procedure of the platform comprises the following steps.
Step 1: the symptom information collection module collects the basic-information and chief-complaint parameters of the patient under examination from the EMR database of the medical system, where the basic-information parameters include the patient's name, gender, age, occupation, marital status, ethnicity, place of origin, employer, and current address, and the chief-complaint parameters include the onset of illness, the course of the disease, and prior diagnosis and treatment.
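The parameters collected in step 1 can be represented as a simple record type. The field names below are illustrative renderings of the parameters listed above, and the sample values are invented placeholders.

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:
    """Basic-information and chief-complaint parameters from the EMR database."""
    # Basic-information parameters
    name: str
    gender: str
    age: int
    occupation: str
    marital_status: str
    ethnicity: str
    place_of_origin: str
    employer: str
    current_address: str
    # Chief-complaint parameters (free-text fields, empty by default)
    onset: str = ""            # how the illness began
    disease_course: str = ""   # progression of the lesion
    prior_treatment: str = ""  # earlier diagnosis and treatment

record = PatientRecord("Zhang San", "F", 54, "teacher", "married",
                       "Han", "Jiangsu", "No. 1 Middle School", "Nanjing")
assert record.age == 54 and record.onset == ""
```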
Step 2: symptom features related to the target tumor are screened. The screening method comprises the following steps.
Step 21) Count the number of patients in the EMR database in whom the target tumor has been found.
Step 22) Collect the basic-information and chief-complaint parameters of these patients and extract every symptom feature appearing in those parameters.
Step 23) Build a decision tree model with the symptom features as classification attributes, and compute from it the incidence probability P_t of each symptom feature as P_t = N_t / N, where N is the number of patients in the EMR database in whom the target tumor has been found, and N_t is the number of those patients exhibiting the t-th symptom feature.
Step 24) After obtaining the incidence probabilities P_t, create a sample set from their values and compute its mean and standard deviation. Standardize the data as z = (x − μ) / σ, where z is the standardized value, σ is the standard deviation of the sample data, and μ is its mean. After standardization, map the standardized values into the interval [0, 1], and classify the incidence probabilities by the resulting function value f(k). The classification mechanism is:
when f(k) falls in the upper sub-interval determined by f(k)_min and f(k)_max, the incidence probability P_t is classified as level one;
when f(k) falls in the lower sub-interval, the incidence probability P_t is classified as level two;
where f(k)_min and f(k)_max are the minimum and maximum of the function values f(k), respectively.
Step 25) The symptom features whose incidence probability P_t is classified as level one are selected as the symptom features related to the target tumor.
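Steps 21) through 25) can be sketched as follows. Since the patent elides the exact mapping to [0, 1] and the threshold inequality, the min-max rescaling of the z-scores and the midpoint split between level one and level two are assumptions; the counts are invented sample data.

```python
import numpy as np

def screen_features(feature_counts: dict, n_patients: int) -> list:
    """Return the features whose incidence probability is classified as level one.

    feature_counts maps each symptom feature to N_t, the number of target-tumor
    patients exhibiting it; n_patients is N, the total number of such patients.
    """
    names = list(feature_counts)
    p = np.array([feature_counts[t] / n_patients for t in names])  # P_t = N_t / N
    z = (p - p.mean()) / p.std()               # z-score standardization
    f = (z - z.min()) / (z.max() - z.min())    # assumed min-max map to [0, 1]
    threshold = 0.5 * (f.min() + f.max())      # assumed midpoint split
    # Level one = upper sub-interval; these are kept as tumor-related features.
    return [t for t, fk in zip(names, f) if fk >= threshold]

counts = {"weight_loss": 80, "night_cough": 75, "headache": 12, "fatigue": 70}
selected = screen_features(counts, 100)
assert "weight_loss" in selected and "headache" not in selected
```

The same screening logic applies to the image features of step 4, with P_m = N_m / N_0 in place of P_t.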
The symptom feature vector extraction module builds the first neural network model, with the basic-information and chief-complaint parameters of diagnosed patients in the EMR database as input and the medically diagnosed tumor severity grade as output. The severity grade is determined from tumor evaluation index parameters, which cover the tumor's morphology, size, location, and relationship to surrounding tissue, and is divided into four grades: I, II, III, and IV. The first neural network model is trained, and the number of hidden layers is adjusted according to the error between the training results and the actual diagnostic results until the accuracy is no lower than the first expected value E(Y)_1, which is calculated as follows:
E(Y)_1 = (1 / N_1) Σ_{i=1..N_1} f(X_i)_1, where E(Y)_1 denotes the first expected value, N_1 the number of input samples of the first neural network, f(X_i)_1 the output function of the first neural network, and X_i1 its i-th sample. The constructed first neural network model then extracts the symptom feature vector of the patient under examination related to the target tumor.
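The train-and-adjust loop described above (grow the hidden-layer count until accuracy reaches the expected value) can be sketched as a simple control loop. The pluggable train_fn and the accuracy figures below are stand-ins for real training against the diagnostic labels, and the 0.90 target is an assumed first expected value.

```python
def adjust_hidden_layers(train_fn, expected_accuracy: float, max_layers: int = 8) -> int:
    """Increase the hidden-layer count until accuracy meets the expected value.

    train_fn(n_layers) trains a model with that many hidden layers and returns
    its accuracy against the actual diagnostic results; the first layer count
    reaching expected_accuracy is returned.
    """
    for n_layers in range(1, max_layers + 1):
        acc = train_fn(n_layers)
        if acc >= expected_accuracy:
            return n_layers
    raise RuntimeError("accuracy target not reached within max_layers")

# Stand-in for real training: accuracy improving with depth (invented figures).
accuracies = {1: 0.72, 2: 0.84, 3: 0.91, 4: 0.93}
chosen = adjust_hidden_layers(lambda n: accuracies.get(n, 0.93), 0.90)
assert chosen == 3
```

The same loop applies to the second neural network with the second expected value as the target.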
Step 3: the imaging acquisition module acquires high-resolution images from the different modalities, including the subject's molecular-level metabolic information images, tissue-structure anatomical information images, and organ functional images.
Step 4: image features related to the target tumor are screened. The screening method is as follows.
Step 41): count the number of patients in the medical imaging library in whom the target tumor has been found;
Step 42): collect the radiological images of the patients in whom the target tumor has been found, and extract the individual image features from all of these images;
Step 43): build a decision tree model with the individual image features as classification attributes, and from it compute the occurrence probability Pm of any image feature as Pm = Nm / N0, where N0 is the number of patients in the medical imaging library in whom the target tumor has been found, and Nm is the number of those patients whose images show the m-th image feature;
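The Pm computation is a simple frequency ratio; a minimal sketch (the function name is illustrative, not from the patent):

```python
def feature_occurrence_probability(n_with_feature: int, n_total: int) -> float:
    """Pm = Nm / N0: fraction of diagnosed patients whose images show feature m."""
    if n_total <= 0:
        raise ValueError("N0 must be positive")
    return n_with_feature / n_total
```

For example, if 30 of 120 diagnosed patients show a given feature, its occurrence probability is 0.25.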
Step 44): after obtaining the occurrence probabilities Pm, create a sample set from their values, compute the mean and standard deviation of the sample set, and standardize the data as z = (x − μ)/σ, where z is the standard variable, σ is the standard deviation of the sample data, and μ is the mean of the sample data. After standardization, the standard variable is mapped into the interval [0,1], and the resulting function value f(k) is used to classify the occurrence probabilities. The classification mechanism is:
when f(k) satisfies the level-one condition, the occurrence probability Pm is classified as level one;
when f(k) satisfies the level-two condition, the occurrence probability Pm is classified as level two;
where f(k)min and f(k)max are the minimum and maximum values of f(k), respectively;
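Step 44) can be sketched as below. Because the source does not reproduce the [0,1] mapping or the explicit threshold conditions, a logistic squash and a split at the midpoint of the observed f(k) range are assumed here purely for illustration:

```python
import math
from statistics import mean, pstdev

def classify_probabilities(p_values):
    """Standardize Pm values (z = (x - mu) / sigma), squash into (0, 1),
    then split into two levels.

    The logistic squash and the midpoint split are assumptions; the patent's
    own mapping and threshold formulas are not reproduced in the source text.
    """
    mu, sigma = mean(p_values), pstdev(p_values)
    if sigma == 0:
        raise ValueError("all probabilities identical; cannot standardize")
    z = [(p - mu) / sigma for p in p_values]
    f = [1.0 / (1.0 + math.exp(-zi)) for zi in z]   # assumed [0,1] mapping
    midpoint = (min(f) + max(f)) / 2.0               # assumed threshold
    return ["level one" if fi >= midpoint else "level two" for fi in f]
```

Under these assumptions, features whose occurrence probability sits in the upper half of the squashed range come out as level one and are retained in step 45).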
Step 45): the image features whose occurrence probability Pm is classified as level one are selected as the image features related to the target tumor. The image feature vector extraction module builds a medical imaging library from the acquired high-resolution images of the different modalities, pre-trains on the various high-resolution image data in the library as input to obtain a second neural network model, and then trains the established second neural network model, adjusting its number of hidden layers according to the error between the training results and the actual alarm results until the accuracy is no less than a second expected value, computed as E(Y)2 = (1/N2) Σ(i=1..N2) f(Xi)2,
where E(Y)2 is the second expected value, N2 is the number of input samples of the second neural network model, f(Xi)2 is the output function of the second neural network model, and Xi2 is the i-th output sample of the second neural network model. The constructed second neural network model is used to extract the image feature vectors of the patient under examination that relate to the target tumor, including the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the tissue-and-organ functional information image feature vector;
Step 5: the feature fusion module builds a fusion model that takes the symptom feature vector, the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the tissue-and-organ functional information image feature vector as inputs and produces a determination decision as output. In this embodiment, the learning-free concatenation fusion method is used as the example: let the symptom feature vector be X1, the molecular-level metabolic information image feature vector X2, the tissue-structure anatomical information image feature vector X3, and the tissue-and-organ functional information image feature vector X4; the four are stacked to form the final output feature X = [X1, X2, X3, X4]. After the fused feature vector is output, the tumor severity grade determination decision is formulated and output from the fused vector features;
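The concatenation fusion X = [X1, X2, X3, X4] is straightforward; a pure-Python sketch:

```python
def fuse_features(*vectors):
    """Learning-free fusion: concatenate the feature vectors into one.

    Stacks X1..X4 end to end, so the fused vector's length is the sum of
    the four input lengths.
    """
    fused = []
    for v in vectors:
        fused.extend(v)
    return fused
```

For example, fusing vectors of lengths 2, 1, 2, and 1 yields a single fused vector of length 6, which is then passed to the severity determination of step 6.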
Step 6: the central detection module builds a tumor severity determination model. Taking the threshold method as an example, threshold reference ranges are set, and when a feature value falls within a given threshold range it is assigned the corresponding severity grade; according to the obtained severity grade determination decision, the tumor severity of the patient under examination is classified into one of grades I, II, III, or IV.
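The threshold method of step 6 can be sketched as follows. The threshold values used here are illustrative placeholders, since the patent does not specify them:

```python
def severity_grade(score, thresholds=(0.25, 0.5, 0.75)):
    """Map a fused feature score to severity grade I-IV by threshold ranges.

    thresholds are the upper bounds of grades I-III; scores at or above the
    last threshold fall into grade IV. The values are assumptions, not taken
    from the patent.
    """
    for grade, upper in zip(("I", "II", "III"), thresholds):
        if score < upper:
            return grade
    return "IV"
```

With these placeholder thresholds, a score of 0.1 falls in grade I, 0.6 in grade III, and anything at or above 0.75 in grade IV.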
The basic principles, main features, and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited by the above embodiments, which, together with the description, merely illustrate its principles; various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of the claimed invention. The scope of protection of the present invention is defined by the appended claims and their equivalents.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311463809.3A CN117352164A (en) | 2023-11-06 | 2023-11-06 | Multimodal tumor detection and diagnosis platform based on artificial intelligence and its processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117352164A true CN117352164A (en) | 2024-01-05 |
Family
ID=89370959
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311463809.3A Pending CN117352164A (en) | 2023-11-06 | 2023-11-06 | Multimodal tumor detection and diagnosis platform based on artificial intelligence and its processing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117352164A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117744026A (en) * | 2024-02-18 | 2024-03-22 | 四川省肿瘤医院 | Multi-modal information fusion method and tumor malignancy probability identification system |
CN119963934A (en) * | 2025-04-11 | 2025-05-09 | 中国人民解放军总医院第一医学中心 | Tumor intelligent recognition system and method based on big data |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107610771A (en) * | 2017-08-23 | 2018-01-19 | 上海电力学院 | A kind of medical science Testing index screening technique based on decision tree |
CN109871396A (en) * | 2019-01-31 | 2019-06-11 | 西南电子技术研究所(中国电子科技集团公司第十研究所) | The normalization fusion method of multisample examination data |
CN113130077A (en) * | 2021-04-30 | 2021-07-16 | 王世宣 | Ovary function age assessment method and device based on artificial neural network |
CN113948211A (en) * | 2021-11-04 | 2022-01-18 | 复旦大学附属中山医院 | A predictive model for noninvasive quantitative assessment of postoperative pancreatic fistula risk before pancreatectomy |
CN115019405A (en) * | 2022-05-27 | 2022-09-06 | 中国科学院计算技术研究所 | Multi-modal fusion-based tumor classification method and system |
CN116740435A (en) * | 2023-06-09 | 2023-09-12 | 武汉工程大学 | Breast cancer ultrasound image classification method based on multi-modal deep learning radiomics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||