
CN117352164A - Multimodal tumor detection and diagnosis platform based on artificial intelligence and its processing method - Google Patents


Info

Publication number
CN117352164A
Authority
CN
China
Prior art keywords
tumor
information
symptom
image feature
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311463809.3A
Other languages
Chinese (zh)
Inventor
邱宇宸
马香兰
沈洪兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Baitongda Medical Supplies Co ltd
Original Assignee
Jiangsu Baitongda Medical Supplies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Baitongda Medical Supplies Co ltd filed Critical Jiangsu Baitongda Medical Supplies Co ltd
Priority to CN202311463809.3A priority Critical patent/CN117352164A/en
Publication of CN117352164A publication Critical patent/CN117352164A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V 10/449: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V 10/451: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V 10/454: Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30096: Tumor; Lesion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03: Recognition of patterns in medical or anatomical images
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Epidemiology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Primary Health Care (AREA)
  • Pathology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses an artificial-intelligence-based multimodal tumor detection and diagnosis platform and a processing method thereof, relating to the technical field of information processing. A fusion model is constructed that takes a symptom feature vector, a molecular-level metabolic information image feature vector, a tissue-structure anatomical information image feature vector, and a tissue/organ functional information image feature vector as inputs and a determination decision as output; the fusion model fuses the acquired symptom feature vector and image feature vectors and outputs a tumor severity grade determination decision, which is used to grade the tumor severity of the patient to be examined. An artificial-intelligence neural network model comprehensively analyzes the patient's basic information, chief-complaint information, and medical imaging information, thereby grading the tumor severity of patients in whom the target tumor has already occurred. This reduces the probability of errors caused by manual image-based judgment and improves the accuracy of tumor diagnosis.

Description

Artificial-Intelligence-Based Multimodal Tumor Detection and Diagnosis Platform and Processing Method Thereof

Technical Field

The present invention relates to the technical field of information processing, and in particular to an artificial-intelligence-based multimodal tumor detection and diagnosis platform and a processing method thereof.

Background Art

Multimodal image-fusion examination (PET/CT + MRI) is a tumor detection technology that has developed rapidly in recent years. PET provides molecular-level metabolic information images of the subject, CT provides anatomical information images of tissue structures, and MRI provides functional information images of tissues and organs. Integrating these three high-resolution images, each based on a different imaging principle, provides rich information for disease diagnosis and makes it possible to detect tumors throughout the body earlier, more deeply, and more completely.

At present, assessing tumor severity still depends on a physician's individual experience and skill in reading images. Thus, although multimodal image-fusion examination provides a good diagnostic tool for tumor detection, image-recognition errors during severity assessment can easily lead to an incorrect determination of a patient's tumor severity, affecting the patient's subsequent diagnosis and treatment. To address this, we propose an artificial-intelligence-based multimodal tumor detection and diagnosis platform and a processing method thereof.

Summary of the Invention

The main purpose of the present invention is to provide an artificial-intelligence-based multimodal tumor detection and diagnosis platform and a processing method thereof, which can effectively solve the problems described in the Background Art.

To achieve the above purpose, the technical solution adopted by the present invention is as follows.

An artificial-intelligence-based multimodal tumor detection and diagnosis platform, comprising:

a symptom information collection module, configured to collect the basic information parameters and chief-complaint information parameters of a patient to be examined from the electronic medical record database of a medical system;

a symptom feature vector extraction module, which establishes a first neural network model that takes the basic information parameters and chief-complaint information parameters of previously diagnosed patients in the electronic medical record database of the medical system as input and the medically diagnosed tumor severity grade as output; the first neural network model is trained, and the number of hidden layers is adjusted according to the error between the training results and the actual diagnosis results until the accuracy is not lower than a first expected value; the trained first neural network model then extracts the symptom feature vector of the patient to be examined that is related to the target tumor;

an imaging image acquisition module, configured to acquire high-resolution images from different imaging modalities, including molecular-level metabolic information images of the subject, anatomical information images of tissue structures, and functional images of tissues and organs;

an image feature vector extraction module, configured to build a medical image library from the acquired high-resolution images of the different modalities; a second neural network model is obtained by pre-training with the various high-resolution image data in the medical image library as input; the second neural network model is trained, and the number of hidden layers is adjusted according to the error between the training results and the actual diagnosis results until the accuracy is not lower than a second expected value; the second neural network model then extracts the image feature vectors of the patient to be examined that are related to the target tumor, including a molecular-level metabolic information image feature vector, a tissue-structure anatomical information image feature vector, and a tissue/organ functional information image feature vector;

a feature fusion module, configured to construct a fusion model that takes the symptom feature vector, the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the tissue/organ functional information image feature vector as inputs and a determination decision as output; the fusion model fuses the acquired symptom feature vector and image feature vectors and outputs a tumor severity grade determination decision;

a central detection module, configured to construct a tumor severity determination model and to grade the tumor severity of the patient to be examined according to the obtained tumor severity grade determination decision.

The detection steps of the detection platform include:

Step 1: the symptom information collection module collects the basic information parameters and chief-complaint information parameters of the patient to be examined from the electronic medical record database of the medical system;

Step 2: symptom features related to the target tumor are screened; the symptom feature vector extraction module establishes a first neural network model that takes the basic information parameters and chief-complaint information parameters of previously diagnosed patients in the electronic medical record database of the medical system as input and the medically diagnosed tumor severity grade as output; the first neural network model is trained, and the number of hidden layers is adjusted according to the error between the training results and the actual diagnosis results until the accuracy is not lower than the first expected value; the trained first neural network model then extracts the symptom feature vector of the patient to be examined that is related to the target tumor;

Step 3: high-resolution images of different imaging modalities are acquired through the imaging image acquisition module, including molecular-level metabolic information images of the subject, anatomical information images of tissue structures, and functional images of tissues and organs;

Step 4: image features related to the target tumor are screened; the image feature vector extraction module builds a medical image library from the acquired high-resolution images of the different modalities; a second neural network model is obtained by pre-training with the various high-resolution image data in the medical image library as input; the second neural network model is trained, and the number of hidden layers is adjusted according to the error between the training results and the actual diagnosis results until the accuracy is not lower than the second expected value; the second neural network model then extracts the image feature vectors of the patient to be examined that are related to the target tumor, including a molecular-level metabolic information image feature vector, a tissue-structure anatomical information image feature vector, and a tissue/organ functional information image feature vector;

Step 5: the feature fusion module constructs a fusion model that takes the symptom feature vector, the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the tissue/organ functional information image feature vector as inputs and a determination decision as output; the fusion model fuses the acquired symptom feature vector and image feature vectors and outputs a tumor severity grade determination decision;

Step 6: the central detection module constructs a tumor severity determination model and, according to the obtained tumor severity grade determination decision, grades the tumor severity of the patient to be examined as one of grades I, II, III, or IV.
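The six steps above can be sketched as a minimal data-flow pipeline. Every function below is an illustrative stub standing in for the platform's actual modules; the names, vector shapes, and the grading rule are assumptions made only to show how the pieces connect, not the patent's models:

```python
# Hypothetical sketch of the six-step detection flow described above.
import numpy as np

def collect_symptom_info(patient_id):
    # Step 1: basic-information and chief-complaint parameters from the EMR database (stub).
    return {"age": 54, "chief_complaint": [1.0, 0.0, 1.0]}

def extract_symptom_vector(info):
    # Step 2: first neural network -> symptom feature vector (stubbed as a plain encoding).
    return np.array([info["age"] / 100.0] + info["chief_complaint"])

def acquire_images(patient_id):
    # Step 3: PET / CT / MRI high-resolution images (stubbed as random arrays).
    rng = np.random.default_rng(0)
    return {m: rng.random((8, 8)) for m in ("PET", "CT", "MRI")}

def extract_image_vectors(images):
    # Step 4: second neural network -> one feature vector per modality (stubbed as column means).
    return {m: img.mean(axis=0) for m, img in images.items()}

def fuse(symptom_vec, image_vecs):
    # Step 5: fusion model; here simply concatenation of all four feature vectors.
    return np.concatenate([symptom_vec] + [image_vecs[m] for m in ("PET", "CT", "MRI")])

def grade(fused):
    # Step 6: map the fused representation to one of grades I-IV (invented placeholder rule).
    return ["I", "II", "III", "IV"][int(fused.sum()) % 4]

info = collect_symptom_info("patient-001")
fused = fuse(extract_symptom_vector(info), extract_image_vectors(acquire_images("patient-001")))
print(len(fused), grade(fused))
```

The only structural point the sketch carries over from the text is that the symptom vector and the three modality vectors are produced independently and only meet in the fusion step.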

Further, the diagnosis platform also includes a data storage module connected to the symptom information collection module and the imaging image acquisition module, and configured to store the basic information parameters and chief-complaint information parameters of the patient to be examined as well as the subject's molecular-level metabolic information images, tissue-structure anatomical information images, and functional images of tissues and organs.

Further, the diagnosis platform also includes a processor and a computer program stored in the data storage module and executable on the processor.

The present invention has the following beneficial effects:

(1) Compared with the prior art, the technical solution of the present invention collects the basic information parameters and chief-complaint information parameters of the patient to be examined from the electronic medical record database of the medical system; establishes a first neural network model that takes the basic information parameters and chief-complaint information parameters of previously diagnosed patients in the database as input and the medically diagnosed tumor severity grade as output, and uses it to extract the symptom feature vector of the patient to be examined that is related to the target tumor; builds a medical image library from the acquired high-resolution images of different imaging modalities, obtains a second neural network model by pre-training with the various high-resolution image data in the library as input, and uses it to extract the image feature vectors of the patient to be examined that are related to the target tumor; and constructs a fusion model that takes the symptom feature vector, the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the tissue/organ functional information image feature vector as inputs and a determination decision as output. The fusion model fuses the acquired symptom feature vector and image feature vectors and outputs a tumor severity grade determination decision that grades the tumor severity of the patient to be examined. The artificial-intelligence neural network model comprehensively analyzes the patient's basic information, chief-complaint information, and medical imaging information, thereby grading the tumor severity of patients in whom the target tumor has already occurred. This reduces the probability of errors caused by manual image-based judgment and improves the accuracy of tumor diagnosis.

Brief Description of the Drawings

Figure 1 is a structural block diagram of the artificial-intelligence-based multimodal tumor detection and diagnosis platform of the present invention;

Figure 2 is a detection flow chart of the artificial-intelligence-based multimodal tumor detection and diagnosis platform of the present invention.

Detailed Description of the Embodiments

The present invention is further described below with reference to specific embodiments. The drawings are for illustration only and are schematic rather than physical diagrams; they should not be construed as limiting the present invention. To better illustrate the specific embodiments, some components in the drawings may be omitted, enlarged, or reduced, and do not represent the dimensions of an actual product.

Embodiment 1

Figure 1 shows the overall structure of the artificial-intelligence-based multimodal tumor detection and diagnosis platform, and Figure 2 shows its detection flow chart.

The detection steps of the detection platform include:

Step 1: the symptom information collection module collects the basic information parameters and chief-complaint information parameters of the patient to be examined from the electronic medical record database of the medical system, where the basic information parameters include the patient's name, gender, age, occupation, marital status, ethnicity, place of origin, employer, and current address, and the chief-complaint information parameters include the onset of illness, the course of the disease, and the diagnosis and treatment history;
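As an illustration, the parameters listed in Step 1 can be modeled as a simple record; the field names are hypothetical English renderings of the listed items, not identifiers from the patent:

```python
# Minimal sketch of a collected EMR record (field names are illustrative).
from dataclasses import dataclass

@dataclass
class PatientRecord:
    # basic information parameters
    name: str
    gender: str
    age: int
    occupation: str
    marital_status: str
    ethnicity: str
    place_of_origin: str
    employer: str
    current_address: str
    # chief-complaint information parameters
    onset: str = ""
    disease_course: str = ""
    diagnosis_and_treatment: str = ""

rec = PatientRecord("A. Patient", "F", 54, "teacher", "married", "-", "-", "-", "-")
print(rec.age, rec.gender)
```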

Step 2: symptom features related to the target tumor are screened, where the screening method comprises the following steps:

Step 21): count the number of patients in the electronic medical record database in whom the target tumor has been found;

Step 22): collect the basic information parameters and chief-complaint information parameters of the patients in whom the target tumor has been found, and obtain every symptom feature appearing in those parameters;

Step 23): build a decision tree model with each symptom feature as a classification attribute, and compute from it the incidence probability Pt of any symptom feature as Pt = Nt / N, where N is the number of patients in the electronic medical record database in whom the target tumor has been found, and Nt is the number of those patients who exhibit the t-th symptom feature;

Step 24): after obtaining the incidence probabilities Pt, create a sample set from their values, obtain the mean and standard deviation of the sample set, and standardize the data using z = (x − μ) / σ, where z is the standard score, σ is the standard deviation of the sample data, and μ is the mean of the sample data. After standardization, the standard score is mapped into the interval [0, 1] by a function f(k), and the value of f(k) is used to classify the incidence probability. The classification rule is:

when f(k) ≥ (f(k)min + f(k)max) / 2, the incidence probability Pt is classified as level one;

when f(k) < (f(k)min + f(k)max) / 2, the incidence probability Pt is classified as level two;

where f(k)min and f(k)max are the minimum and maximum values of f(k), respectively;

Step 25): the symptom features whose incidence probability Pt is classified as level one are selected as the symptom features related to the target tumor;
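Steps 21) through 25) can be sketched end to end. Two assumptions are made where the original formulas are only referenced symbolically: the [0, 1] mapping f(k) is taken to be the logistic function of the standard score, and "level one" is taken to be the upper half of f(k)'s observed range:

```python
# Illustrative sketch of the symptom-feature screening in steps 21)-25).
import math

def screen_symptoms(counts, total):
    """counts: {feature: number of target-tumor patients showing it}; total: N."""
    p = {feat: n / total for feat, n in counts.items()}         # Pt = Nt / N
    values = list(p.values())
    mu = sum(values) / len(values)                              # sample mean
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))  # std dev
    # assumed logistic mapping of the standard score into [0, 1]
    f = {feat: 1.0 / (1.0 + math.exp(-(v - mu) / sigma)) for feat, v in p.items()}
    fmin, fmax = min(f.values()), max(f.values())
    cutoff = (fmin + fmax) / 2.0                                # assumed midpoint split
    # level one -> feature is considered related to the target tumor
    return [feat for feat, fk in f.items() if fk >= cutoff]

related = screen_symptoms({"cough": 80, "fever": 30, "weight_loss": 70, "rash": 5}, 100)
print(sorted(related))
```

With the invented counts above, only the two high-incidence features clear the cutoff, which is the intended behavior of the screening step.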

The symptom feature vector extraction module establishes a first neural network model that takes the basic information parameters and chief-complaint information parameters of previously diagnosed patients in the electronic medical record database of the medical system as input and the medically diagnosed tumor severity grade as output, where the tumor severity grade is determined from tumor evaluation index parameters; the tumor evaluation indices include the tumor's morphology, size, location, and relationship to surrounding tissues, divided into four grades I, II, III, and IV. The established first neural network model is trained, and the number of hidden layers is adjusted according to the error between the training results and the actual diagnosis results until the accuracy is not lower than the first expected value, where the first expected value is calculated as

E(Y)1 = (1/N1) · Σ(i=1..N1) f(Xi)1, where E(Y)1 denotes the first expected value, N1 denotes the number of input samples of the first neural network, f(Xi)1 denotes the output function of the first neural network, and Xi1 denotes the i-th sample of the first neural network. The symptom feature vector of the patient to be examined that is related to the target tumor is then extracted through the constructed first neural network model;
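The "adjust the number of hidden layers until accuracy reaches the expected value" procedure amounts to a simple search loop. In this sketch, evaluate() is a stand-in for actually retraining and validating the network, and its accuracy curve is invented purely for illustration:

```python
# Minimal sketch of the hidden-layer tuning loop implied above.
def evaluate(num_hidden_layers):
    # Stub: pretend accuracy improves with depth, saturating at 0.95.
    return min(0.95, 0.60 + 0.08 * num_hidden_layers)

def tune_hidden_layers(expected_accuracy, max_layers=10):
    layers = 1
    while evaluate(layers) < expected_accuracy and layers < max_layers:
        layers += 1   # adjust the model's hidden-layer count and retrain
    return layers, evaluate(layers)

layers, acc = tune_hidden_layers(expected_accuracy=0.90)
print(layers, acc)
```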

Step 3: high-resolution images of different imaging modalities are acquired through the imaging image acquisition module, including molecular-level metabolic information images of the subject, anatomical information images of tissue structures, and functional images of tissues and organs;

Step 4: image features related to the target tumor are screened, where the screening method is as follows:

Step 41): count the number of patients in the medical image library in whom the target tumor has been found;

Step 42): collect the imaging images of the patients in whom the target tumor has been found, and obtain every imaging feature appearing in those images;

Step 43): build a decision tree model with each imaging feature as a classification attribute, and compute from it the occurrence probability Pm of any image feature as Pm = Nm / N0, where N0 is the number of patients in the medical image library in whom the target tumor has been found, and Nm is the number of those patients who exhibit the m-th imaging feature;

Step 44): after obtaining the occurrence probabilities Pm, create a sample set from their values, obtain the mean and standard deviation of the sample set, and standardize the data using z = (x − μ) / σ, where z is the standard score, σ is the standard deviation of the sample data, and μ is the mean of the sample data. After standardization, the standard score is mapped into the interval [0, 1] by a function f(k), and the value of f(k) is used to classify the occurrence probability. The classification rule is:

when (f(k)_min + f(k)_max) / 2 < f(k) ≤ f(k)_max, the occurrence probability P_m is classified as level one;

when f(k)_min ≤ f(k) ≤ (f(k)_min + f(k)_max) / 2, the occurrence probability P_m is classified as level two;

where f(k)_min and f(k)_max are respectively the minimum and maximum of the function values of f(k);
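Steps 41)–44) can be sketched as follows. The sigmoid squashing function and the midpoint split between f(k)_min and f(k)_max are assumptions reconstructed from the [0, 1] mapping described in the text, and the feature names and counts are purely illustrative:

```python
import math

def screen_features(counts, n_total):
    """Classify each image feature's occurrence probability as level one or two.

    counts:  {feature_name: N_m}, number of patients exhibiting the feature
    n_total: N_0, number of identified target tumor patients in the library
    """
    # Step 43: occurrence probability P_m = N_m / N_0
    p = {f: n / n_total for f, n in counts.items()}

    # Step 44: standardize with the sample mean and standard deviation
    vals = list(p.values())
    mu = sum(vals) / len(vals)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals))
    z = {f: (v - mu) / sigma for f, v in p.items()}

    # Map into (0, 1) with a sigmoid (assumed form of f(k))
    fk = {f: 1.0 / (1.0 + math.exp(-k)) for f, k in z.items()}

    # Classify by the midpoint of [f(k)_min, f(k)_max] (assumed split)
    lo, hi = min(fk.values()), max(fk.values())
    mid = (lo + hi) / 2
    return {f: 1 if v > mid else 2 for f, v in fk.items()}

# Hypothetical counts out of 100 target tumor patients
levels = screen_features(
    {"ring_enhancement": 90, "edema": 55, "calcification": 10}, 100)
```

Features classified as level 1 would then be retained by step 45) as the features related to the target tumor.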

Step 45): screen the image features whose occurrence probability P_m is classified as level one as the image features related to the target tumor. Through the image feature vector extraction module, construct a medical image library from the acquired high-resolution images of the different imaging modalities, pre-train on the various high-resolution image data of the medical image library as input to obtain a second neural network model, train the established second neural network model, and adjust the number of hidden layers of the model according to the error between the training results and the actual alarm results until the accuracy is not lower than a second expected value, where the second expected value is calculated as

E(Y)_2 = (1/N_2) · Σ_{i=1}^{N_2} f(X_{i2}), where E(Y)_2 denotes the second expected value; N_2 denotes the number of input samples of the second neural network model; f(X_i)_2 denotes the output function of the second neural network model; and X_{i2} denotes the i-th input sample of the second neural network model. According to the constructed second neural network model, extract the image feature vectors related to the target tumor of the patient to be detected, including the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the functional information image feature vector of tissues and organs;
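Read as stated, the expected value E(Y)_j is simply the sample mean of the network's outputs over its N_j input samples. A minimal sketch, with hypothetical per-sample outputs f(X_i2):

```python
def expected_value(outputs):
    """E(Y)_j = (1/N_j) * sum_i f(X_ij): mean of the network's outputs."""
    return sum(outputs) / len(outputs)

# Hypothetical outputs f(X_i2) of the second neural network, N_2 = 4
e_y2 = expected_value([0.8, 0.9, 0.7, 1.0])
```

The hidden-layer count is then adjusted until the model's accuracy is not lower than this value.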

Step 5: the feature fusion module constructs a fusion model which takes the symptom feature vector, the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the functional information image feature vector of tissues and organs as inputs and the determination decision as output. In this embodiment, the zero-learning-cost concatenation feature fusion method is taken as an example: let the symptom feature vector be X1, the molecular-level metabolic information image feature vector be X2, the tissue-structure anatomical information image feature vector be X3, and the functional information image feature vector of tissues and organs be X4; the four are stacked to form the final output feature X = [X1, X2, X3, X4]. After the fused feature vector is output, a tumor severity level determination decision is formulated and output according to the fused feature vector;
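The zero-learning-cost concatenation fusion of step 5 is plain vector concatenation; a minimal sketch with toy feature vectors (the values are illustrative, not from the patent):

```python
def concat_fusion(x1, x2, x3, x4):
    """Stack the four modality feature vectors into one fused vector
    X = [X1, X2, X3, X4]."""
    return list(x1) + list(x2) + list(x3) + list(x4)

fused = concat_fusion([0.2, 0.7],       # symptom features X1
                      [0.1, 0.4],       # molecular-level metabolic X2
                      [0.9],            # anatomical X3
                      [0.3, 0.5, 0.8])  # functional X4
```

Concatenation needs no trained fusion weights, which is why the embodiment describes it as having no learning cost; the downstream determination model consumes the fused vector directly.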

Step 6: the center detection module constructs a tumor severity determination model. Taking the threshold method as an example, threshold reference ranges are set, and a feature value falling within a given threshold range is determined as the corresponding severity level; according to the obtained tumor severity level determination decision, the tumor severity level of the patient to be detected is determined and classified into one of levels Ⅰ, Ⅱ, Ⅲ, and Ⅳ.
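The threshold method of step 6 can be sketched as a lookup against ordered cut-points; the cut-point values here are illustrative assumptions, not thresholds specified by the patent:

```python
def severity_level(score, cuts=(0.25, 0.5, 0.75)):
    """Map a fused feature value to severity level I-IV by threshold ranges."""
    for level, cut in zip(("I", "II", "III"), cuts):
        if score < cut:  # first range whose upper bound exceeds the score
            return level
    return "IV"         # above every cut-point: most severe level

level = severity_level(0.62)
```

In practice the reference ranges would be calibrated against clinically graded cases rather than fixed by hand.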

The basic principles, main features, and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited by the above embodiments; the above embodiments and the description only illustrate the principles of the present invention. Various changes and improvements may be made without departing from the spirit and scope of the present invention, and all such changes and improvements fall within the scope of the claimed invention. The scope of protection of the present invention is defined by the appended claims and their equivalents.

Claims (10)

1. A multi-modal tumor detection and diagnosis platform based on artificial intelligence, characterized by comprising:
the symptom information acquisition module is used for acquiring basic information parameters and complaint information parameters of a patient to be detected according to information of the medical system electronic medical record database;
the symptom feature vector extraction module is used for establishing a first neural network model which takes the basic information parameters and the chief complaint information parameters of diseased patients in the electronic medical record database of the medical system as input and the tumor severity grade in medical diagnosis as output, training the established first neural network model, adjusting the number of hidden layers of the model according to the error between the training result and the actual alarm result until the accuracy is not lower than a first expected value, and extracting the symptom feature vector related to the target tumor of the patient to be detected through the established first neural network model;
the imaging image acquisition module is used for acquiring high-resolution images of different imaging, including a molecular level metabolism information image, a tissue structure anatomical information image and a functional image of a tissue organ of a subject;
the image feature vector extraction module is used for constructing a medical image library from the acquired high-resolution images of the different imaging modalities, pre-training on the various high-resolution image data of the medical image library as input to obtain a second neural network model, training the established second neural network model, adjusting the number of hidden layers of the model according to the error between the training result and the actual alarm result until the accuracy is not lower than a second expected value, and extracting the image feature vectors related to the target tumor of the patient to be detected according to the established second neural network model, wherein the image feature vectors comprise a molecular-level metabolic information image feature vector, a tissue-structure anatomical information image feature vector, and a functional information image feature vector of tissues and organs;
the feature fusion module is used for constructing a fusion model which takes the symptom feature vector, the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the functional information image feature vector of tissues and organs as inputs and the determination decision as output, fusing the acquired symptom feature vector and image feature vectors according to the constructed fusion model, and outputting a tumor severity level determination decision;
the center detection module is used for constructing a tumor severity judging model and judging the tumor severity grade of the patient to be detected according to the obtained tumor severity grade judging decision.
2. The artificial intelligence based multimodal tumor detection diagnostic platform of claim 1 wherein the first expected value and the second expected value are calculated as,
E(Y)_j = (1/N_j) · Σ_{i=1}^{N_j} f(X_{ij}), wherein E(Y)_j represents the j-th expected value; N_j represents the number of input samples of the j-th neural network; f(X_i)_j represents the output function of the j-th neural network; X_{ij} represents the i-th input sample of the j-th neural network; and j = 1, 2.
3. The artificial intelligence based multimodal tumor detection and diagnosis platform of claim 1, wherein the basic information parameters include the patient's name, gender, age, occupation, marital status, ethnicity, native place, work unit, and present address; the chief complaint information parameters comprise the onset condition, disease course changes, and diagnosis and treatment status.
4. The artificial intelligence based multi-modal tumor detection and diagnosis platform of claim 1 wherein the tumor severity level is determined based on tumor assessment index parameters, the tumor assessment index including a morphology index, a size index, a location index, and a surrounding tissue relationship index of the tumor.
5. The artificial intelligence based multi-modal tumor detection and diagnosis platform of claim 1, wherein the tumor severity level determination decision classifies the tumor severity of the patient to be detected into one of four different levels Ⅰ, Ⅱ, Ⅲ, and Ⅳ.
6. The artificial intelligence-based multi-modal tumor detection and diagnosis platform according to claim 1, further comprising a data storage module, wherein the storage module is connected with the symptom information acquisition module and the imaging image acquisition module, and is used for storing basic information parameters and complaint information parameters of a patient to be detected, and molecular level metabolic information images, anatomical information images of tissue structures and functional images of tissue organs of a subject.
7. The artificial intelligence based multimodal tumor detection diagnostic platform of claim 1 further comprising a processor and a computer program stored on the data storage module and executable on the processor, wherein the processor is capable of performing the functions of any of the modules of claim 1 when the processor executes the program.
8. The multi-modal tumor detection and diagnosis platform based on artificial intelligence according to claim 1, wherein the screening method for symptom characteristics related to the target tumor is as follows:
step 1), counting the number of found target tumor patients in an electronic medical record database of a medical system;
step 2), basic information parameters and complaint information parameters of the found target tumor patients are collected, and various symptom characteristics in the basic information parameters and the complaint information parameters of all the found target tumor patients are obtained;
step 3), constructing a decision tree model with each symptom characteristic as a classification attribute, and calculating the occurrence probability P_t under any symptom characteristic according to the decision tree model as P_t = N_t / N, wherein N is the number of target tumor patients found in the electronic medical record database, and N_t is the number of patients found in the electronic medical record database exhibiting the t-th symptom characteristic of the target tumor;
step 4), after obtaining the occurrence probability P_t, creating a sample set from the values of P_t, obtaining the mean and standard deviation of the sample set, and standardizing the data with z = (P_t − μ) / σ, wherein z is the standardized parameter, σ is the standard deviation of the sample data, and μ is the mean of the sample data; after standardization, mapping the standardized parameter into the interval [0, 1] with f(k) = 1 / (1 + e^(−k)), where k = z, and classifying by the function value of f(k); the classification mechanism is as follows:
when (f(k)_min + f(k)_max) / 2 < f(k) ≤ f(k)_max, the occurrence probability P_t is classified as the first class;
when f(k)_min ≤ f(k) ≤ (f(k)_min + f(k)_max) / 2, the occurrence probability P_t is classified as the second class;
wherein f(k)_min and f(k)_max are respectively the minimum and maximum of the function values of f(k);
step 5), screening the symptom characteristics whose occurrence probability P_t is classified as first class as the symptom characteristics related to the target tumor.
9. The multi-modal tumor detection and diagnosis platform based on artificial intelligence according to claim 1, wherein the screening method for image features related to the target tumor is as follows:
step 1), counting the number of found target tumor patients in a medical image library;
step 2), acquiring imaging images of the found target tumor patients, and acquiring various image features in the imaging images of all the found target tumor patients;
step 3), constructing a decision tree model with each image feature as a classification attribute, and calculating the occurrence probability P_m under any image feature according to the decision tree model as P_m = N_m / N_0, wherein N_0 is the number of target tumor patients found in the medical image library, and N_m is the number of patients found in the medical image library exhibiting the m-th image feature of the target tumor;
step 4), after obtaining the occurrence probability P_m, creating a sample set from the values of P_m, obtaining the mean and standard deviation of the sample set, and standardizing the data with z = (P_m − μ) / σ, wherein z is the standardized parameter, σ is the standard deviation of the sample data, and μ is the mean of the sample data; after standardization, mapping the standardized parameter into the interval [0, 1] with f(k) = 1 / (1 + e^(−k)), where k = z, and classifying by the function value of f(k); the classification mechanism is as follows:
when (f(k)_min + f(k)_max) / 2 < f(k) ≤ f(k)_max, the occurrence probability P_m is classified as level one;
when f(k)_min ≤ f(k) ≤ (f(k)_min + f(k)_max) / 2, the occurrence probability P_m is classified as level two;
wherein f(k)_min and f(k)_max are respectively the minimum and maximum of the function values of f(k);
step 5), screening the image features whose occurrence probability P_m is classified as level one as the image features related to the target tumor.
10. The artificial intelligence based multi-modality tumor detection and diagnosis platform of claim 1, wherein the detecting step of the detection platform includes,
step one, the symptom information acquisition module acquires the basic information parameters and the chief complaint information parameters of the patient to be detected from the electronic medical record database of the medical system;
step two, screening the symptom characteristics related to the target tumor, establishing, through the symptom feature vector extraction module, a first neural network model which takes the basic information parameters and the chief complaint information parameters of diseased patients in the electronic medical record database of the medical system as input and the tumor severity grade in medical diagnosis as output, training the established first neural network model, adjusting the number of hidden layers of the model according to the error between the training result and the actual alarm result until the accuracy is not lower than a first expected value, and extracting the symptom feature vector related to the target tumor of the patient to be detected through the established first neural network model;
step three, obtaining high-resolution images of different imageology through an imaging image acquisition module, wherein the high-resolution images comprise a molecular level metabolism information image, a tissue structure anatomy information image and a functional image of a tissue organ of a subject;
step four, screening the image features related to the target tumor, constructing, through the image feature vector extraction module, a medical image library from the acquired high-resolution images of the different imaging modalities, pre-training on the various high-resolution image data of the medical image library as input to obtain a second neural network model, training the established second neural network model, adjusting the number of hidden layers of the model according to the error between the training result and the actual alarm result until the accuracy is not lower than a second expected value, and extracting the image feature vectors related to the target tumor of the patient to be detected according to the constructed second neural network model, wherein the image feature vectors comprise a molecular-level metabolic information image feature vector, a tissue-structure anatomical information image feature vector, and a functional information image feature vector of tissues and organs;
step five, the feature fusion module constructs a fusion model which takes the symptom feature vector, the molecular-level metabolic information image feature vector, the tissue-structure anatomical information image feature vector, and the functional information image feature vector of tissues and organs as inputs and the determination decision as output, fuses the acquired symptom feature vector and image feature vectors according to the constructed fusion model, and outputs a tumor severity level determination decision;
step six, the central detection module builds a tumor severity judging model, and judges and classifies the tumor severity grade of the patient to be detected into any one grade of I, II, III and IV according to the obtained tumor severity grade judging decision.
CN202311463809.3A 2023-11-06 2023-11-06 Multimodal tumor detection and diagnosis platform based on artificial intelligence and its processing method Pending CN117352164A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311463809.3A CN117352164A (en) 2023-11-06 2023-11-06 Multimodal tumor detection and diagnosis platform based on artificial intelligence and its processing method


Publications (1)

Publication Number Publication Date
CN117352164A true CN117352164A (en) 2024-01-05

Family

ID=89370959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311463809.3A Pending CN117352164A (en) 2023-11-06 2023-11-06 Multimodal tumor detection and diagnosis platform based on artificial intelligence and its processing method

Country Status (1)

Country Link
CN (1) CN117352164A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117744026A (en) * 2024-02-18 2024-03-22 四川省肿瘤医院 Multi-modal information fusion method and tumor malignancy probability identification system
CN119963934A (en) * 2025-04-11 2025-05-09 中国人民解放军总医院第一医学中心 Tumor intelligent recognition system and method based on big data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610771A (en) * 2017-08-23 2018-01-19 上海电力学院 A kind of medical science Testing index screening technique based on decision tree
CN109871396A (en) * 2019-01-31 2019-06-11 西南电子技术研究所(中国电子科技集团公司第十研究所) The normalization fusion method of multisample examination data
CN113130077A (en) * 2021-04-30 2021-07-16 王世宣 Ovary function age assessment method and device based on artificial neural network
CN113948211A (en) * 2021-11-04 2022-01-18 复旦大学附属中山医院 A predictive model for noninvasive quantitative assessment of postoperative pancreatic fistula risk before pancreatectomy
CN115019405A (en) * 2022-05-27 2022-09-06 中国科学院计算技术研究所 Multi-modal fusion-based tumor classification method and system
CN116740435A (en) * 2023-06-09 2023-09-12 武汉工程大学 Breast cancer ultrasound image classification method based on multi-modal deep learning radiomics




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination