
CN114782307B - Enhanced CT imaging rectal cancer staging auxiliary diagnosis system based on deep learning - Google Patents


Info

Publication number
CN114782307B
CN114782307B (application CN202210128818.6A; earlier publication CN114782307A)
Authority
CN
China
Prior art keywords
image
enhanced
lesion
module
rectal cancer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210128818.6A
Other languages
Chinese (zh)
Other versions
CN114782307A (en)
Inventor
邹兵兵
万寿红
王万勤
邱晨阳
张翰韬
刘宏武
毕军焱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Medical University
First Affiliated Hospital of Anhui Medical University
University of Science and Technology of China USTC
Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Original Assignee
Anhui Medical University
First Affiliated Hospital of Anhui Medical University
University of Science and Technology of China USTC
Institute of Artificial Intelligence of Hefei Comprehensive National Science Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Medical University, First Affiliated Hospital of Anhui Medical University, University of Science and Technology of China USTC, Institute of Artificial Intelligence of Hefei Comprehensive National Science Center filed Critical Anhui Medical University
Priority to CN202210128818.6A priority Critical patent/CN114782307B/en
Publication of CN114782307A publication Critical patent/CN114782307A/en
Application granted granted Critical
Publication of CN114782307B publication Critical patent/CN114782307B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30028 Colon; Small intestine
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Primary Health Care (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Pathology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a deep-learning-based auxiliary diagnosis system for rectal cancer staging on enhanced CT images, comprising an enhanced CT image input module, an enhanced CT image and annotation database, an image preprocessing module, a rectal lesion region discrimination module, a diseased lymph node identification module, a lesion feature extraction module, a comprehensive diagnosis module, and a visualization module. To address the complex structure of rectal enhanced CT image data and the difficulty of distinguishing cancerous regions and their stage, the invention studies annotation and dataset construction for rectal cancer enhanced CT images, discrimination of rectal cancer lesion regions with a self-attention deep learning model, and identification of metastatic lymph nodes through sequence-adaptive feature fusion. An intelligent auxiliary diagnosis system for rectal cancer staging is designed and implemented, is verified in clinical application experiments, and improves the accuracy and efficiency of comprehensive preoperative data acquisition for rectal cancer.

Description

Enhanced CT image rectal cancer staging auxiliary diagnosis system based on deep learning
Technical Field
The invention belongs to the technical field of medical image processing, and particularly relates to a deep-learning-based auxiliary diagnosis system for rectal cancer staging on enhanced CT images.
Background
Rectal cancer, generally defined as cancer between the dentate line and the rectosigmoid junction, is among the most common malignancies of the digestive tract. Because the rectum lies low, rectal cancer is readily detected by digital rectal examination and colonoscopy. However, because the rectum extends deep into the pelvis and its anatomical relationships are complex, the limitations of these examination methods make accurate preoperative staging difficult. Staging diagnosis of rectal cancer is critical to deciding an accurate, personalized treatment regimen and is a precondition for a good patient prognosis. X-ray, MRI, and CT are the imaging techniques currently in widest use. X-ray imaging suits bony structures but does not resolve soft-tissue organs clearly; barium enema radiography is commonly used for rectal cancer but reflects only the lesion site and the lumen of the diseased bowel segment. MRI offers high soft-tissue resolution, but its high cost and limited equipment availability restrict broad clinical application, and its examination and imaging times are long. Among rectal cancer screening methods, CT examination is currently the most popular and among the most effective, and CT is also the most widely used clinical imaging examination for colorectal cancer. In rectal cancer detection, enhanced CT images clearly show tumor invasion of the outer muscular layer and surrounding organs, and also present the size and morphological characteristics of lymph nodes.
CT images have therefore become an important reference for rectal cancer staging diagnosis. For massive medical image data, constructing a rectal cancer staging dataset and training a deep learning neural network enables benign/malignant and stage identification of rectal tumors, partially replacing the work of radiologists, achieving rapid and accurate clinical TN staging of rectal cancer, and improving working efficiency.
The Chinese invention patent "Deep learning colorectal cancer polyp segmentation device based on enhanced multi-scale features" (application number 202110879728.6, filed 2021.08.02) discloses a device that first adjusts the resolution of colorectal polyp image data and normalizes it; extracts diverse polyp features with a feature extractor built from multi-scale residual structures and a receptive field block that captures multi-scale receptive fields; propagates context information through dense multi-scale skip connections to refine segmentation details; completes boundary segmentation with an attention mechanism informed by local context; and applies deep supervision during upsampling to reduce gradient vanishing or explosion during training. The method addresses the difficulty of distinguishing and locating small polyps and the ambiguous boundaries between polyps and surrounding tissue; the deep supervision mechanism also improves the network's gradients, accelerating convergence and shortening training time. Experimental results demonstrate the feasibility of the locally context-informed attention mechanism for computer-aided diagnosis, and good recognition can be achieved with suitable model parameters and structures. However, the many organs and vessels adjacent to the rectum, and the large individual variability in rectal angle and morphology, make rectal CT structure considerably more complex.
Using such a device or method for auxiliary diagnosis directly fails to exploit the overall global information of the enhanced CT image: global dependencies among features cannot be fully modeled, detection accuracy for lesion sites is low, weak targets such as lymph nodes are easily missed, and the reliability of staging results is reduced.
Disclosure of Invention
To address the shortcomings of the prior art, the invention aims to provide a deep-learning-based auxiliary diagnosis system (equally, an auxiliary diagnosis device) for rectal cancer staging on enhanced CT images. Targeting the complex structure of rectal enhanced CT image data and the difficulty of distinguishing cancerous regions and their stage, it studies annotation and dataset construction for rectal cancer enhanced CT images, discrimination of rectal cancer lesion regions with a self-attention deep learning model, and identification of metastatic lymph nodes through sequence-adaptive feature fusion; designs and implements an intelligent auxiliary diagnosis system for rectal cancer staging; carries out clinical application experiment verification; and improves the accuracy and efficiency of comprehensive preoperative data acquisition for rectal cancer.
To achieve this aim, the invention adopts the following technical scheme: a deep-learning-based enhanced CT image rectal cancer staging auxiliary diagnosis system comprising an enhanced CT image input module, an enhanced CT image and annotation database, an image preprocessing module, a rectal lesion region discrimination module, a diseased lymph node identification module, a lesion feature extraction module, a comprehensive diagnosis module, and a visualization module;
the enhanced CT image input module is used for inputting rectal enhanced CT images;
the enhanced CT image and annotation database is used for storing the rectal CT images input by the CT image input module, together with the enhanced CT rectal cancer image dataset and the annotation dataset;
the image preprocessing module is used for noise reduction and image enhancement of the enhanced CT images;
the rectal lesion region discrimination module is used for judging suspected rectal tumor regions and segmenting them;
the diseased lymph node identification module is used for identifying peri-intestinal diseased lymph nodes;
the lesion feature extraction module is used for identifying and collecting feature data of the rectal tumor lesion region and counting the number of peri-intestinal diseased lymph nodes to form a rectal lesion feature parameter set;
the comprehensive diagnosis module is used for fusing the identification information of the rectal lesion region, the identification information of diseased lymph nodes, and the features of the examined region, and giving a TN staging auxiliary diagnosis result by comparison against a TN staging prior knowledge base;
the visualization module is used for displaying the input enhanced CT image and annotating on it the relevant feature information of the rectal tumor lesion region.
Further, the lesion feature extraction module identifies and collects feature parameters of the rectal tumor lesion region, including the wall thickness of the lesion region, the density difference between the tumor and the normal bowel wall, whether spiculated (burr-like) protrusions are present, and whether adjacent structures are involved.
Still further, the visualization module displays information on the rectal cancer lesion region features annotated on the image, including the lesion location, lesion wall thickness, peri-lesion spiculated protrusions, involved adjacent structures, diseased lymph node information, and the TN staging result annotated on the image.
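The lesion feature parameter set described above can be held in a small typed container. This is only an illustrative sketch: the field names, units, and types are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class RectalLesionFeatures:
    """Hypothetical container for the rectal lesion feature parameter set
    (names and units are illustrative, not from the patent)."""
    wall_thickness_mm: float        # thickness of the lesioned bowel wall
    density_diff_hu: float          # tumor vs. normal wall density difference
    spiculated_protrusions: bool    # spiculated (burr-like) protrusions present
    adjacent_involvement: bool      # adjacent structures involved
    lesion_lymph_node_count: int    # number of peri-intestinal diseased nodes

demo = RectalLesionFeatures(12.0, 35.0, True, False, 3)
print(demo.lesion_lymph_node_count)
```

A structure like this would be what the comprehensive diagnosis module consumes alongside the segmentation and lymph node outputs.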
Still further, the system work execution includes the steps of:
S1, constructing an enhanced CT rectal cancer image dataset and an annotation database;
S2, performing format conversion and image noise reduction on the enhanced CT image;
S3, the rectal lesion region discrimination module trains a self-attention-based deep learning model with the enhanced CT rectal cancer image dataset and the training data in the annotation database, judges whether the input enhanced CT image contains a suspected rectal tumor region, and segments the lesion region;
S4, identifying peri-intestinal diseased lymph nodes from CT image lymph node features using a sequence-adaptive feature fusion method;
S5, the lesion feature extraction module extracts the wall thickness of the rectal tumor lesion region, the density difference between the tumor and the normal wall, peri-lesion spiculated protrusions, and involvement of adjacent structures as appearance features, and counts the number of peri-intestinal diseased lymph nodes to form a rectal lesion feature parameter set;
S6, the comprehensive diagnosis module fuses the identification information of the rectal tumor lesion area, the identification information of the lesion lymph node and the lesion characteristic information, and realizes the clinical TN stage auxiliary diagnosis of the rectal cancer by combining TN stage priori knowledge base comparison.
Still further, in step S1, the step of constructing the enhanced CT rectal cancer image dataset and the labeling database includes:
S11, defining a data labeling format, and distributing different labels to rectal cancer lesions and suspicious and normal lymph nodes around the intestines corresponding to different rectal cancer stages;
S12, annotating the enhanced CT image according to the specified data annotation format, using an image annotation tool together with a pathology knowledge base and the characteristics of the enhanced CT image;
S13, storing the annotation file and the original image file correspondingly, storing corresponding necessary object information, and constructing a data set.
Still further, in step S2, image data preprocessing is adopted, and the steps include:
S21, uniformly converting the enhanced CT rectal cancer image data, originally stored as DICOM image series, into the NIfTI data format, so that the original CT image data format is consistent with the corresponding label data format;
S22, reading the converted NIfTI data with a medical image processing library based on a deep learning framework, converting the CT rectal cancer image data and annotation label data into the tensor data structure processed by the deep learning framework, and establishing a mapping between original image data and label data;
S23, applying image resampling, noise reduction, random affine transformation, and channel-dimension addition to the mapped image and label data, thereby realizing image enhancement.
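The preprocessing steps above can be sketched in miniature for an already-loaded CT volume. The DICOM-to-NIfTI conversion itself (S21) would use a library such as SimpleITK or dicom2nifti and is omitted; the Hounsfield window values here are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def preprocess_ct(volume, window=(-125, 225)):
    """Minimal sketch of step S2 on a loaded CT volume: clip to a
    Hounsfield-unit window (illustrative soft-tissue window), rescale to
    [0, 1], and add a leading channel axis so the array matches the
    (C, D, H, W) tensor layout a deep learning framework expects."""
    lo, hi = window
    v = np.clip(np.asarray(volume, dtype=np.float32), lo, hi)
    v = (v - lo) / (hi - lo)      # rescale windowed HU values to [0, 1]
    return v[None, ...]           # add channel dimension -> (1, D, H, W)

vol = np.random.default_rng(1).integers(-1000, 1000, size=(4, 32, 32))
t = preprocess_ct(vol)
print(t.shape)  # (1, 4, 32, 32)
```

In the full pipeline the same spatial transforms (resampling, affine augmentation) would be applied jointly to image and label volumes so their mapping is preserved.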
Still further, in step S3, the rectal lesion region discrimination module adopts a self-attention deep learning model, using the self-attention mechanism to build global dependencies among CT image features and capture the overall internal correlation of image data and image features. The steps for discriminating suspected lesion regions based on the self-attention mechanism are:
S31, inputting the preprocessed CT image into a feature extraction network, and obtaining a feature map of a corresponding depth level through one or more downsampling and pooling operations;
S32, introducing an attention mechanism: each channel of the feature map is compressed, and after compression, the importance of each channel is obtained through an activation function and converted into an attention vector;
S33, fusing the obtained channel importance weights back into the feature map of the original deep learning network, thereby guiding the network's focus and realizing fusion of the attention mechanism;
The specific formulas are:
A = Att(X, θ) = δ(W2 δ(W1 GAP(X)))    (1)
Y = A X    (2)
where X is the input feature map, GAP denotes global average pooling, δ an activation function, W1 and W2 learnable weight matrices, A the resulting channel attention vector, and Y the reweighted feature map.
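Equations (1) and (2) describe a squeeze-and-excitation-style channel attention. A minimal numpy sketch follows; the weights, reduction ratio, and shapes are hypothetical, and since the patent writes δ for both activations, sigmoid is used for both here (SE-style blocks conventionally use ReLU then sigmoid instead).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """Eqs. (1)-(2): A = delta(W2 delta(W1 GAP(X))), Y = A * X.
    x: (C, H, W) feature map; w1: (C//r, C), w2: (C, C//r) are
    hypothetical learned weights with reduction ratio r."""
    gap = x.mean(axis=(1, 2))             # squeeze: global average pooling, (C,)
    a = sigmoid(w2 @ sigmoid(w1 @ gap))   # excitation: attention vector, (C,)
    return a[:, None, None] * x           # reweight channels: Y = A X

rng = np.random.default_rng(0)
c, r = 8, 2
x = rng.standard_normal((c, 16, 16))
w1 = rng.standard_normal((c // r, c))
w2 = rng.standard_normal((c, c // r))
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 16, 16)
```

Because each attention weight lies in (0, 1), the output is an elementwise damping of the input, which is what "guiding the network's focus" amounts to at the tensor level.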
Still further, in step S4, the diseased lymph node identification module adopts a sequence-adaptive feature fusion method, using annotation information from consecutive multi-frame images to identify weak, small lymph node targets. The lymph node diagnosis processing steps based on sequence-adaptive feature fusion are:
S41, comprehensively classifying and judging the fused sequence features and the feature information of the multi-frame data to obtain suspicious lymph node positions and information in the feature map;
S42, mapping the lymph node position information in the feature map back onto the original enhanced CT image.
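One way to read "sequence-adaptive feature fusion" is a learned weighting over per-slice feature maps. The sketch below is a deliberate simplification under that assumption: each frame's pooled features produce a scalar score, and a softmax over scores gives adaptive fusion weights. The scoring vector and shapes are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_sequence_features(frames, score_w):
    """Sketch of adaptive fusion over consecutive CT slices.
    frames: (T, C, H, W) per-frame feature maps; score_w: (C,) is a
    hypothetical learned scoring vector. Softmax over the per-frame
    scores yields adaptive weights that fuse the sequence."""
    pooled = frames.mean(axis=(2, 3))               # (T, C)
    weights = softmax(pooled @ score_w)             # (T,), sums to 1
    fused = np.tensordot(weights, frames, axes=1)   # weighted sum -> (C, H, W)
    return fused, weights

rng = np.random.default_rng(0)
frames = rng.standard_normal((3, 4, 8, 8))   # 3 consecutive slices
score_w = rng.standard_normal(4)
fused, w = fuse_sequence_features(frames, score_w)
print(fused.shape)  # (4, 8, 8)
```

The fused map would then feed the classifier of step S41, letting evidence from neighboring slices reinforce a weak lymph node signal on any single slice.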
Still further, in step S6, the comprehensive diagnosis module fuses information on rectal lesions, lymph node metastasis, and lesion features, and makes a judgment by comparison with a medical pathology knowledge base:
S61, normalizing the suspicious lesion position information, suspicious lymph nodes and appearance characteristic information extracted by the characteristic extraction network to realize the consistency of data formats;
S62, eliminating semantic differences among different information through corresponding fusion networks, and realizing feature splicing and fusion;
S63, according to comparison with a corresponding clinical knowledge base, carrying out corresponding classification and discrimination on the integrated comprehensive information, and realizing comprehensive discrimination on the rectal cancer information;
S64, mapping the judgment result of the S63 back to the original enhanced CT image, and realizing corresponding TN labeling and visualization.
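Steps S61 to S63 can be sketched in miniature: normalize each information source, concatenate, and compare against a knowledge base. The patent uses learned fusion networks; the nearest-prototype matching and the toy knowledge base below are purely illustrative assumptions.

```python
import numpy as np

def normalize(v):
    """Min-max normalize one feature vector to [0, 1] (cf. step S61)."""
    v = np.asarray(v, dtype=np.float32)
    span = v.max() - v.min()
    return (v - v.min()) / span if span > 0 else np.zeros_like(v)

def fuse_and_stage(lesion_feat, lymph_feat, shape_feat, prototypes):
    """Normalize each source, concatenate them (S62), and pick the
    nearest stage prototype from a hypothetical knowledge base (S63)."""
    fused = np.concatenate([normalize(lesion_feat),
                            normalize(lymph_feat),
                            normalize(shape_feat)])
    stages = list(prototypes)
    dists = [np.linalg.norm(fused - prototypes[s]) for s in stages]
    return stages[int(np.argmin(dists))]

# Toy knowledge base with one prototype vector per stage (illustrative).
kb = {"T1": np.zeros(6), "T3": np.array([0., 1., 0., 1., 0., 1.])}
print(fuse_and_stage([1, 2], [3, 4], [5, 6], kb))  # T3
```

Step S64 would then project the chosen stage label back onto the original enhanced CT image for display.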
In step S11, five different labels are specified for the intestinal wall; the stages correspond to labels as T0-label1, T1-label2, T2-label3, T3-label4, and T4-label5. Two different labels are specified for lymph nodes: normal lymph node-label6 and suspicious lymph node-label7.
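The label assignments in step S11 amount to a simple lookup table: one label per T stage of the intestinal wall plus two lymph node labels. The function and key names below are illustrative, not from the patent.

```python
# Label scheme from step S11: T stages of the intestinal wall map to
# labels 1-5; lymph nodes map to labels 6-7.
STAGE_LABELS = {"T0": 1, "T1": 2, "T2": 3, "T3": 4, "T4": 5}
LYMPH_LABELS = {"normal_lymph": 6, "suspicious_lymph": 7}

def label_for(structure, stage_or_status):
    """Map an annotated structure to its integer label (sketch)."""
    table = STAGE_LABELS if structure == "wall" else LYMPH_LABELS
    return table[stage_or_status]

print(label_for("wall", "T3"))              # 4
print(label_for("lymph", "suspicious_lymph"))  # 7
```

Keeping the mapping in one place makes the annotation files and the training labels consistent by construction.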
The invention has the technical effects that:
(1) The invention constructs a brand-new, effective rectal cancer enhanced CT image dataset from scratch: preoperative enhanced CT image data of rectal cancer at different stages, together with pathology and related data, are collected, and the images are annotated with cancerous regions, lymphatic metastasis regions, and staging diagnosis information to establish the rectal cancer dataset.
(2) The invention studies an image segmentation network based on a self-attention deep learning model to discriminate rectal lesion regions. The self-attention mechanism, which computes each response as a feature-weighted sum over all positions, models global dependencies well and helps capture the overall internal correlation of image data and image features, improving the segmentation precision of lesion regions and making them easier to distinguish.
(3) The invention proposes a deep learning model that identifies weak, small lymph node targets using annotation information from consecutive multi-frame images. The model learns from consecutive multi-frame lymph node annotations of the same patient and fuses the multiple annotations adaptively, achieving identification of small metastatic lymph node targets.
(4) With hospital support for image acquisition, the invention can obtain a large number of high-quality thin-slice enhanced CT images of rectal cancer. Based on a pathology knowledge base, both the rectal image and the surrounding lymph node images are considered during processing, and both are used to judge rectal cancer, improving diagnostic accuracy.
(5) The invention introduces human-machine interaction into the computer-aided diagnosis device: a doctor can interact with the system during use. In the segmentation and detection stages, the doctor refines the output of each step. In the classification stage, correctly classified results are rewarded and incorrectly classified results are penalized. The system performs reinforcement learning from this feedback, gradually approaching ideal performance.
Drawings
FIG. 1 is a block diagram of the deep-learning-based CT image rectal cancer staging auxiliary diagnosis device;
FIG. 2 is a file structure diagram of the dataset constructed in operation step S12 of the present invention;
FIG. 3 shows a specific implementation of the self-attention-based deep learning model in operation step S33 of the present invention;
FIG. 4 is a network block diagram of the self-attention-based deep learning model in operation step S34 of the present invention;
FIG. 5 is a block diagram of the suspicious lymph node network based on sequence fusion in operation step S43 of the present invention;
FIG. 6 is a schematic diagram of the visualized output of the results in operation step S64 of the present invention.
Detailed Description
Referring to fig. 1, the invention provides a deep learning-based enhanced CT image rectal cancer stage auxiliary diagnosis system, which comprises an enhanced CT image input module, an enhanced CT image and labeling database, an image preprocessing module, a rectal lesion area discriminating module, a lesion lymph identifying module, a lesion feature extracting module, a comprehensive diagnosis module and a visualization module.
The operation of the device comprises the following steps:
S1, constructing an enhanced CT rectal cancer image dataset and an annotation database;
S2, performing format conversion and image noise reduction on the enhanced CT image;
S3, the rectal lesion region discrimination module judges, based on a self-attention deep learning model, whether the CT image contains a suspected rectal tumor (lesion) region, and segments the lesion region;
S4, the diseased lymph node identification module identifies peri-intestinal diseased lymph nodes from CT image lymph node features using a sequence-adaptive feature fusion method;
S5, the lesion feature extraction module identifies and collects appearance feature data of the lesion region, namely the wall thickness of the rectal tumor lesion region, the density difference between the tumor and the normal wall, the presence of spiculated protrusions, and involvement of adjacent structures, and counts the number of peri-intestinal diseased lymph nodes to form a rectal lesion feature parameter set;
S6, the comprehensive diagnosis module fuses the identification information of the rectal tumor lesion region, the identification information of diseased lymph nodes, and the lesion feature information, and realizes TN staging auxiliary diagnosis of rectal cancer by comparison against a TN staging prior knowledge base;
The visualization module is used for displaying the input enhanced CT image and annotating on it the lesion wall thickness, spiculated protrusions, adjacent structure involvement, diseased lymph node information, and the TN staging result.
Compared with the prior art, the device can rapidly and accurately identify rectal cancer lesion regions and diseased lymph node metastasis, rapidly determine the clinical TN stage of rectal cancer, provide an intelligent screening reference for rectal cancer CT images, effectively improve the accuracy and efficiency of comprehensive preoperative data acquisition, and help doctors find problems in time and determine the corresponding TN stage, providing a more reliable scientific basis for standardized operations by clinicians.
Further, the enhanced CT image rectal cancer stage auxiliary diagnosis system based on deep learning comprises an enhanced CT image input module, an enhanced CT image and labeling database, an image preprocessing module, a rectal lesion area discriminating module, a lesion lymph identifying module, a lesion feature extracting module, a comprehensive diagnosis module and a visualization module. Wherein:
the enhanced CT image input module is used for inputting rectal enhanced CT images;
the enhanced CT image and annotation database is used for storing the rectal CT images input by the CT image input module, together with the enhanced CT rectal cancer image dataset and the annotation dataset;
the image preprocessing module is used for noise reduction and image enhancement of the enhanced CT images;
the rectal lesion region discrimination module is used for judging suspected rectal tumor regions and segmenting them;
the diseased lymph node identification module is used for identifying peri-intestinal diseased lymph nodes;
the lesion feature extraction module is used for identifying and collecting feature data on the wall thickness of the rectal tumor lesion region, the density difference between the tumor and the normal wall, whether spiculated protrusions are present, and whether adjacent structures are involved, and for counting the number of peri-intestinal diseased lymph nodes to form a rectal lesion feature parameter set;
the comprehensive diagnosis module is used for fusing the identification information of the rectal lesion region, the identification information of diseased lymph nodes, and the features of the examined region, and giving a TN staging auxiliary diagnosis result by comparison against a TN staging prior knowledge base;
the visualization module is used for displaying the input enhanced CT image and annotating on it the lesion location, lesion wall thickness, peri-lesion spiculated protrusions, involved adjacent structures, diseased lymph node information, and the TN staging result;
The deep-learning-based enhanced CT image rectal cancer staging auxiliary diagnosis system of the invention operates through the following steps:
s1, constructing an enhanced CT rectal cancer image data set and a labeling database;
S2, performing format conversion and image noise reduction on the enhanced CT image;
S3, the rectal lesion area discrimination module trains a self-attention-based deep learning model using the enhanced CT rectal cancer image dataset and the training data in the annotation database, judges whether the input enhanced CT image contains a suspected rectal tumor lesion area, and segments the lesion area;
s4, identifying periintestinal lesion lymph nodes by adopting a sequence self-adaptive feature fusion method according to CT image lymph node features;
S5, the lesion feature extraction module identifies and collects lesion appearance feature data including the wall thickness of the rectal tumor lesion area, the density difference between the tumor and the normal wall, peri-lesional spiculated protrusions and involvement of adjacent structures, and counts the number of peri-intestinal diseased lymph nodes to form a rectal lesion feature parameter set;
And S6, the comprehensive diagnosis module fuses the identification information of the rectal tumor lesion area, the identification information of the diseased lymph nodes and the lesion feature information, and, by comparison with a TN staging prior knowledge base, realizes clinical rectal cancer TN staging auxiliary diagnosis.
In step S1, the step of constructing the enhanced CT rectal cancer image dataset and the labeling database includes:
S11, defining a data labeling format, and distributing different labels to rectal cancer lesions and suspicious and normal lymph nodes around the intestines corresponding to different rectal cancer stages;
s12, marking the enhanced CT image according to a specified data marking format by utilizing an image marking tool and combining a pathology knowledge base with the characteristics of the enhanced CT image;
S13, storing the annotation file and the original image file correspondingly, storing corresponding necessary object information, and constructing a data set.
In step S2, image data preprocessing is employed, the steps including:
S21, uniformly converting the enhanced CT rectal cancer image data, stored in the original medical images as DICOM image series, into the NIfTI data format, so that the original CT image data format is consistent with the corresponding label data format;
S22, reading the converted NIfTI data with a medical image processing library based on a deep learning framework, converting the CT rectal cancer image data and the annotation label data as a whole into the tensor data structures handled by the deep learning framework, and establishing the mapping relation between the original image data and the label data;
S23, performing operations such as image resampling, noise reduction, random affine transformation and channel-dimension expansion on the image data and label data for which the mapping relation has been established, thereby realizing image enhancement and facilitating improvement of the subsequent deep learning network model performance.
Further, in step S3, the rectal lesion area discrimination module performs discrimination with a self-attention deep learning model, using a self-attention mechanism to construct global dependency relationships among CT image features and to capture the overall internal correlation of the image data and image features, thereby improving the segmentation and discrimination precision of the lesion area. The discrimination steps are as follows:
S31, inputting the preprocessed CT image into a feature extraction network, and obtaining a feature map of a corresponding depth level through one or more downsampling and pooling operations;
S32, introducing a self-attention mechanism to acquire global feature information of the CT image, capture channel relations and improve the feature representation capability; each channel of the feature map is correspondingly compressed, and after compression the importance of the different channels is obtained through fully connected layers and activation function operations and converted into an attention vector, which better helps the model distinguish the rectal wall area from background organs and facilitates measurement of the rectal wall thickness;
S33, fusing the obtained feature importance weights into the original deep learning network feature map, further guiding the network to focus on the lesion area and realizing the fusion of the attention mechanism; the specific formulas are expressed as follows:
A = Att(X, θ) = δ(W2 δ(W1 GAP(X)))    (1)
Y = A · X    (2)
Wherein A is the channel-wise attention weight obtained after the attention calculation, Att() is the attention calculation function, X is the feature map data extracted by the feature extraction convolutional network, and θ denotes the network parameters. δ is the ReLU activation function, used to provide nonlinear gating in the network; W1 and W2 correspond to two fully connected layers realizing feature dimension reduction; GAP() is the global average pooling function; and Y is the feature map obtained after X is weighted by the attention module.
The attention of the network is continuously adjusted, and the network is guided to pay further attention to the focus area in repeated tests so as to improve the discrimination capability of the network to the suspected focus area.
Further, in step S4, the diseased lymph node identification module performs identification with a method based on sequence adaptive feature fusion, using continuous multi-frame image annotation information to identify small, low-contrast lymph node targets. The lymph node identification processing steps based on sequence adaptive feature fusion include:
combining multiple sequence feature images into one feature image through a feature extraction network, and correspondingly extracting voxel information, inter-sequence continuous-change information and similarity information;
S41, comprehensively classifying and judging the characteristic information of the fusion sequence characteristics and the multi-frame data to obtain suspicious lymph node positions and information in the characteristic map;
S42, mapping the suspicious lymph node position information in the feature map back to the original enhanced CT image, thereby locating the suspicious lymph node positions in the original image.
Further, in step S6, the comprehensive diagnosis module fuses information on rectal lesions, lymph node metastasis and lesion features, and makes a judgment in combination with data comparison against a medical pathology knowledge base:
S61, normalizing the suspicious lesion position information, suspicious lymph node information and lesion appearance feature information extracted by the feature extraction network, to achieve consistency of data formats;
S62, eliminating semantic differences among different information through corresponding fusion networks, and realizing feature splicing and fusion;
S63, according to comparison with a corresponding clinical knowledge base, carrying out corresponding classification and discrimination on the integrated comprehensive information, and realizing comprehensive discrimination on the rectal cancer information;
S64, mapping the judgment result of the S63 back to the original enhanced CT image, and realizing corresponding TN labeling and visualization.
The technical solutions of the embodiments of the present invention will be clearly and completely described below, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by persons skilled in the art without making creative efforts based on the embodiments of the present invention are included in the scope of protection of the present invention.
The invention provides a deep learning-based enhanced CT image rectal cancer staging auxiliary diagnosis system, which takes rectal CT images as the analysis object. By constructing and training an image segmentation network based on a self-attention deep learning model, together with a deep learning model that identifies small, low-contrast lymph node targets using continuous multi-frame image annotation information, the system can rapidly and accurately delineate the lesion extent and obtain a rectal cancer staging result.
The technical principle rests on the fact that a normal rectal CT image and a CT image containing a lesion area differ in features such as image gray scale, intestinal wall thickness and shape, and the size and morphology of peri-intestinal lymph nodes, and that cancers at different stages differ to a certain extent in gray scale, shape and other features. On the basis of these differences, a trained deep learning neural network performs image feature extraction to effectively recognize the CT image. For example, a T1-stage tumor invades the submucosa but does not involve the muscularis propria; it appears as a locally enhanced focus within the submucosa, presenting as a soft-tissue-density mass with a clear boundary, without intestinal wall thickening, and with a smooth, clear outer edge of the intestinal wall. A T2-stage tumor invades the muscularis propria but remains confined to the intestinal wall, the key feature being that the outer muscle layer is not breached; it presents as a locally enhanced focus in the rectal wall, variable in size, often lobulated and asymmetric. When the long axis of the tumor is aligned with the scanning plane, the intestinal wall is irregularly thickened in a tubular shape and stiff, with a narrowed lumen, while the outer edge of the intestinal wall remains smooth and clear. A T3-stage tumor grows through the outer muscle layer into the surrounding perirectal tissue; the outer edge of the intestinal wall muscle layer is rough, and spread of the tumor into the perirectal fat may appear as increased needle-like densities in the perirectal fat or as deformation and distortion of the muscularis propria; a low-density ischemic necrosis area may form inside larger tumors.
A T4-stage tumor invades surrounding structures such as the peritoneal reflection, pelvic wall, vagina, prostate, bladder or seminal vesicles; it is characterized by increased density, spots, streak-like shadows and soft-tissue shadows in the surrounding adipose tissue, or by loss of the fat plane between adjacent organs, and a low-density ischemic necrosis area may appear in larger tumors. For N staging, a lymph node short diameter greater than 5 mm, a blurred boundary, uneven density, increased enhancement, or three or more clustered lymph nodes are taken as the positive criteria. These are reflected as differences in image features, mainly texture features and shape features. Texture features are formed by gray-level features distributed according to certain rules, and regions with the same texture feature can be expressed as shape features. Whether a suspicious lesion area is present can be distinguished from differences in texture features, especially at shape boundaries, and the lesion type can be further identified from the combined texture and shape feature differences.
Based on the above recognition basis, the present invention provides an enhanced CT image rectal cancer stage auxiliary diagnosis system based on deep learning, as shown in fig. 1. The system comprises an enhanced CT image input module, an enhanced CT image and labeling database, an image preprocessing module, a rectum lesion area judging module, a lesion lymph identifying module, a lesion feature extracting module, a comprehensive diagnosis module and a visualization module, wherein:
the enhanced CT image input module is used for inputting a rectum enhanced CT image;
The enhanced CT image and annotation database is used for storing the rectal enhanced CT images input by the CT image input module, together with the enhanced CT rectal cancer image dataset and the annotation dataset;
The image preprocessing module is used for carrying out noise reduction and image enhancement processing on the enhanced CT image;
the rectum lesion area judging module is used for judging a suspected rectum tumor lesion area and dividing the lesion area;
the pathological change lymph identification module is used for identifying periintestinal pathological change lymph nodes;
The lesion feature extraction module is used for identifying and collecting lesion appearance feature data such as the wall thickness of the rectal tumor lesion area, the density difference between the tumor and the normal wall, the presence or absence of peri-lesional spiculated (burr-like) protrusions and the presence or absence of adjacent-structure involvement, and for counting the number of peri-intestinal diseased lymph nodes to form a rectal lesion feature parameter set;
The comprehensive diagnosis module is used for fusing identification information of a rectum lesion area, identification information of lesion lymph nodes and lesion characteristics, and combining TN stage priori knowledge base comparison to give a result of TN stage auxiliary diagnosis of the rectum cancer.
The visualization module is used for displaying the input enhanced CT image and annotating on the image the rectal lesion wall thickness, peri-lesional spiculated protrusions, involved adjacent structures, diseased lymph node information and the TN staging result;
Further, the operation and execution of the rectal cancer stage auxiliary diagnosis system comprises the following steps:
s1, constructing an enhanced CT rectal cancer image data set and a labeling database;
S2, performing format conversion and image noise reduction on the enhanced CT image;
S3, the rectal lesion area discrimination module trains a self-attention-based deep learning model using the enhanced CT rectal cancer image dataset and the training data in the annotation database, judges whether the input enhanced CT image contains a suspected rectal tumor lesion area, and segments the lesion area;
s4, identifying periintestinal lesion lymph nodes by adopting a sequence self-adaptive feature fusion method according to CT image lymph node features;
S5, the lesion feature extraction module identifies and collects lesion appearance feature data such as the wall thickness and spiculated protrusions of the rectal tumor lesion area, and counts the number of peri-intestinal diseased lymph nodes to form a rectal lesion feature parameter set;
And S6, the comprehensive diagnosis module fuses the identification information of the rectal tumor lesion area, the identification information of the diseased lymph nodes and the lesion feature information, and, by comparison with a TN staging prior knowledge base, realizes rectal cancer TN staging auxiliary diagnosis.
In step S1, the step of constructing the enhanced CT rectal cancer image dataset and the labeling database includes:
S11, defining a data annotation format, and assigning different labels to rectal segments and peri-intestinal lymph nodes corresponding to different rectal cancer stages; five different labels are defined for the intestinal wall, with the stage-to-label correspondence T0-label1, T1-label2, T2-label3, T3-label4 and T4-label5; two different labels are defined for lymph nodes, with the state-to-label correspondence normal lymph node-label6 and suspicious lymph node-label7;
s12, marking the enhanced CT image according to a specified data marking format by utilizing an image marking tool and combining a pathology knowledge base with the characteristics of the enhanced CT image;
S13, storing the annotation file and the original image file correspondingly, storing corresponding patient information, and constructing a data set, wherein the data set file structure is shown in figure 2.
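As an illustrative sketch only (the dictionary names, file-naming scheme and helper function are assumptions for demonstration, not the actual implementation), the label correspondence of step S11 and the image/annotation pairing of step S13 can be organized as follows:

```python
# Hypothetical sketch of the S11 labeling scheme and S13 file pairing.
# Label values follow the correspondence stated above: T0-T4 -> labels 1-5,
# normal/suspicious lymph node -> labels 6/7. File names are illustrative.

STAGE_LABELS = {"T0": 1, "T1": 2, "T2": 3, "T3": 4, "T4": 5}
LYMPH_LABELS = {"normal": 6, "suspicious": 7}

def pair_files(image_names, annotation_names):
    """Pair each original image file with its annotation file by shared case ID."""
    ann_by_case = {name.split("_")[0]: name for name in annotation_names}
    pairs = []
    for img in image_names:
        case_id = img.split("_")[0]
        if case_id in ann_by_case:  # keep only cases that have an annotation
            pairs.append((img, ann_by_case[case_id]))
    return pairs

pairs = pair_files(["case01_img.nii.gz", "case02_img.nii.gz"],
                   ["case01_seg.nii.gz"])
```

In this sketch a case without an annotation (case02) is simply dropped, so the dataset only contains complete image/label pairs.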
Further, in step S2, the image data preprocessing step includes:
S21, uniformly converting the enhanced CT image data, stored in the original medical images as DICOM image series, into the NIfTI data format, so that the original CT rectal cancer image data format is consistent with the corresponding annotation label data format;
S22, reading converted NiFTI data by using a medical image processing library based on a deep learning frame, integrally converting CT rectal cancer image data and labeling tag data into a tensor data structure processed by the deep learning frame, and establishing a mapping relation between original image data and tag data;
S23, performing operations such as image resampling, noise reduction, random affine transformation and channel-dimension expansion on the image data and label data for which the mapping relation has been established, thereby realizing image enhancement and facilitating improvement of the deep learning network performance.
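A minimal numpy sketch of S23-style preprocessing (intensity normalization, simulated noise augmentation and channel-dimension expansion; the parameter values and function name are illustrative assumptions, and a real pipeline would use a medical imaging library for resampling and affine transforms):

```python
import numpy as np

def preprocess(volume, noise_std=0.01, seed=0):
    """Toy version of step S23: normalize intensities, add Gaussian noise,
    and expand the channel dimension.

    volume: (D, H, W) CT volume; returns a (1, D, H, W) float array.
    """
    v = volume.astype(np.float32)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)   # intensity normalization
    rng = np.random.default_rng(seed)
    v = v + rng.normal(0.0, noise_std, v.shape)      # simulated noise augmentation
    return v[np.newaxis, ...]                        # add channel dimension

vol = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
out = preprocess(vol)
```

The same transform would be applied jointly to image and label tensors so their mapping relation from S22 is preserved.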
Further, in step S3, the step of discriminating the suspected lesion area based on the self-attention deep learning model includes:
S31, firstly inputting the preprocessed data into a feature extraction convolution network, and obtaining a feature map of a corresponding depth level after feature extraction of the enhanced CT image data is completed through a series of downsampling and pooling operations;
S32, introducing a self-attention mechanism, acquiring global feature information of the CT image and capturing a channel relation, and improving feature representation capability. Each channel of the feature map is correspondingly compressed, importance of different channels is obtained through operations such as a full connection layer, an activation function and the like after compression, and the importance is converted into an attention vector, so that a model can be better helped to distinguish a rectal wall area from a background organ, and rectal wall thickness measurement is facilitated;
S33, fusing the obtained feature importance weights into the original deep learning network feature map, further guiding the network to focus on the lesion area and realizing the fusion of the attention mechanism. The formulas are expressed as follows:
A = Att(X, θ) = δ(W2 δ(W1 GAP(X)))    (1)
Y = A · X    (2)
Wherein A is the channel-wise attention weight obtained after the attention calculation, Att() is the attention calculation function, X is the feature map data extracted by the feature extraction convolutional network, and θ denotes the network parameters. δ is the ReLU activation function, used to provide nonlinear gating in the network; W1 and W2 correspond to two fully connected layers realizing feature dimension reduction; GAP() is the global average pooling function; and Y is the feature map obtained after X is weighted by the attention module.
The specific implementation form of the self-attention model is shown in fig. 3, in which the feature map obtained by the feature extraction network has size N×H×W, where N is the channel dimension of the feature map, H is its height and W its width. B is the N-dimensional channel weight vector obtained after the global pooling operation on the feature map. C is the feature map after the fully connected layers and ReLU activation functions, and D is the feature map obtained after C is weighted by the attention calculation and reshaped back to N×H×W;
the attention of the network is continuously adjusted, and the network is guided to pay further attention to the focus area in repeated training so as to improve the discrimination capability of the network to the rectal tumor focus area. Its overall deep learning network block diagram is shown in fig. 4.
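The channel attention of formulas (1) and (2) can be sketched in numpy as follows (the weight shapes and the reduction ratio r are illustrative assumptions; the formula as written in this document uses ReLU for both activations δ):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def channel_attention(X, W1, W2):
    """Formula (1)-(2): A = delta(W2 delta(W1 GAP(X))), Y = A * X.

    X:  feature map of shape (N, H, W);
    W1: (N//r, N) reduction layer; W2: (N, N//r) restoration layer.
    """
    gap = X.mean(axis=(1, 2))          # GAP(): global average pooling -> (N,)
    A = relu(W2 @ relu(W1 @ gap))      # one attention weight per channel
    Y = A[:, None, None] * X           # broadcast channel weights over H, W
    return A, Y

rng = np.random.default_rng(0)
N, H, W, r = 8, 4, 4, 2
X = rng.standard_normal((N, H, W))
W1 = rng.standard_normal((N // r, N))
W2 = rng.standard_normal((N, N // r))
A, Y = channel_attention(X, W1, W2)
```

In the trained network W1 and W2 are learned parameters; here they are random only to show the data flow and shapes of fig. 3.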
Further, in step S4, the suspicious lymph node identification processing step based on the sequence adaptive feature fusion includes:
merging multiple sequence feature images into one feature image through the feature extraction convolutional network, and extracting features such as voxel information, inter-sequence continuous-change information and similarity;
S41, comprehensively classifying and judging the characteristic information of the fusion sequence characteristics and the multi-frame data to obtain suspicious lymph node positions and information in the characteristic map;
S42, mapping the lymph node position information in the feature map back to the original CT image, thereby locating the suspicious lymph node positions in the original image. A block diagram of the suspicious lymph node identification network based on sequence fusion is shown in fig. 5.
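A hedged numpy sketch of the sequence fusion step (here a simple weighted average over consecutive frame feature maps; the fusion weights are an illustrative placeholder, since the actual network learns them adaptively):

```python
import numpy as np

def fuse_sequence(frames, weights=None):
    """Fuse K consecutive frame feature maps (K, C, H, W) into one (C, H, W)."""
    frames = np.asarray(frames, dtype=np.float32)
    k = frames.shape[0]
    if weights is None:
        weights = np.full(k, 1.0 / k)             # uniform weights as a placeholder
    weights = np.asarray(weights, dtype=np.float32)
    weights = weights / weights.sum()             # normalize to keep the output scale
    return np.tensordot(weights, frames, axes=1)  # weighted sum over the frame axis

frames = np.stack([np.full((2, 3, 3), v) for v in (1.0, 2.0, 3.0)])
fused = fuse_sequence(frames)
```

Fusing adjacent frames in this way is what lets a small lymph node that is faint in one slice be reinforced by its appearance in neighboring slices.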
Further, in step S6, the comprehensive diagnosis module fuses information on rectal lesions, lymph node metastasis and lesion features, and makes its discrimination in combination with the medical pathology information base and the TN staging prior knowledge base, in the following steps:
s61, normalizing the suspicious lesion position information and suspicious lymph node information extracted by the feature extraction convolutional network to achieve consistency of data formats;
S62, eliminating semantic differences among the different information through corresponding fusion networks, realizing feature splicing and fusion, and fusing the diseased lymph node identification information with the lesion appearance feature information;
S63, comparing with a TN stage priori knowledge base, and correspondingly classifying and distinguishing the integrated information to realize the clinical TN stage auxiliary diagnosis of the rectal cancer;
s64, mapping the diagnosis result back to the original CT image, and realizing corresponding TN stage labeling and visual display. The resulting visual annotation is shown in fig. 6.
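As an illustrative, rule-based sketch of the knowledge-base comparison (the thresholds follow the N-staging criteria stated earlier in this description — a short diameter above 5 mm, or three or more clustered nodes, counts as positive — while the function names and the simple T/N string combination are assumptions, not the trained fusion network):

```python
def n_stage_positive(node_short_diams_mm, clustered_count):
    """N-stage positivity per the stated criteria: any node with short
    diameter > 5 mm, or >= 3 clustered lymph nodes."""
    return any(d > 5.0 for d in node_short_diams_mm) or clustered_count >= 3

def tn_label(t_stage, node_short_diams_mm, clustered_count):
    """Combine a predicted T stage with the rule-based N decision."""
    n = "N+" if n_stage_positive(node_short_diams_mm, clustered_count) else "N0"
    return f"{t_stage}{n}"

result = tn_label("T2", [4.0, 6.2], clustered_count=1)
```

In the actual system the T stage comes from the segmentation/attention network of step S3 and the node measurements from step S4; this sketch only shows how the prior-knowledge comparison of S63 could close the loop.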
Specifically, the whole network framework is trained by a three-stage subnet network.
The first subnetwork is the training of the rectal suspected lesion area discrimination network. The tumor lesion area discrimination network is trained as a segmentation network using the enhanced CT image training sample set in the image library. Batch sizes of 16, 32, 64, 128 and 256 are tried, and the setting with the best training result is kept as the final hyperparameter. The dataset is divided into a training set, a validation set and a test set at a ratio of 6:2:2. During training, as the number of training iterations increases, the accuracy on the validation set keeps improving, while the test set accuracy first rises and then falls, i.e., the network tends to overfit. The test set performance is analyzed, and the network is fine-tuned up and down within about 3% to obtain the optimal performance point for suspected lesion diagnosis. The parameters and network weights are saved, and the first subnetwork training ends.
The second subnetwork is the suspicious lymph node identification network based on sequence adaptive features. The network is first trained with single-frame suspicious lymph node data. Batch sizes of 16, 32, 64, 128 and 256 are tried, and the setting with the best training result is kept as the final hyperparameter. The dataset is divided in the same ratio as for the first subnetwork, and the training procedure is similar. After the single-frame training result is obtained, the network structure is further improved to fuse multi-frame sequence information for training; the final accuracy improves by 3% over the network using only single-frame information, and the parameters are further fine-tuned for better performance.
The third stage jointly trains the two subnetworks and, through fully connected layers and related operations, performs comprehensive staging diagnosis of rectal cancer using the clinical medicine prior knowledge base. After joint training, the whole network framework is further fine-tuned using the training set data, which improves the recognition accuracy by 2%.
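The 6:2:2 dataset split used for all three training stages can be sketched as follows (the helper name and shuffling seed are illustrative):

```python
import random

def split_dataset(indices, ratios=(6, 2, 2), seed=42):
    """Shuffle sample indices and split them into train/val/test at the given ratio."""
    idx = list(indices)
    random.Random(seed).shuffle(idx)       # deterministic shuffle for reproducibility
    total = sum(ratios)
    n_train = len(idx) * ratios[0] // total
    n_val = len(idx) * ratios[1] // total
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_dataset(range(100))
```

With 100 samples this yields 60/20/20 disjoint subsets, matching the 6:2:2 ratio described above.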
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art, who is within the scope of the present invention, should make equivalent substitutions or modifications according to the technical scheme of the present invention and the inventive concept thereof, and should be covered by the scope of the present invention.

Claims (10)

1. The deep learning-based enhanced CT image rectal cancer staging auxiliary diagnosis system is characterized by comprising an enhanced CT image input module, an enhanced CT image and labeling database, an image preprocessing module, a rectal lesion area discrimination module, a lesion lymph identification module, a lesion feature extraction module, a comprehensive diagnosis module and a visualization module;
the enhanced CT image input module is used for inputting a rectum enhanced CT image;
The enhanced CT image and labeling database is used for storing the rectal enhanced CT image input by the enhanced CT image input module and enhancing a CT rectal cancer image data set and a labeling data set;
The image preprocessing module is used for carrying out noise reduction and image enhancement processing on the enhanced CT image;
The rectum lesion area judging module adopts a self-attention deep learning model to judge, utilizes a self-attention mechanism to construct a global dependency relationship of CT image characteristics, captures the integral internal correlation of image data and image characteristics, is used for judging a suspected rectum tumor area and is used for dividing the area;
The identification by the lesion lymph identification module adopts a method based on sequence adaptive feature fusion, using continuous multi-frame image annotation information to identify small, low-contrast lymph node targets, for identifying peri-intestinal diseased lymph nodes;
the focus feature extraction module is used for identifying and collecting feature data of a focus region of the rectal tumor, counting the number of periintestinal lesion lymph nodes and forming a rectal lesion feature parameter set;
The comprehensive diagnosis module is used for fusing information of a plurality of aspects of rectal lesions, lymph node metastasis and lesion characteristics and making discrimination by combining data comparison of a medical pathology knowledge base;
The visualization module is used for displaying the input enhanced CT image and labeling the relevant information of the characteristics of the tumor focus area of the rectum on the image.
2. The deep learning-based enhanced CT image rectal cancer staging auxiliary diagnosis system of claim 1, wherein the feature parameters of the rectal tumor lesion area identified and collected by the lesion feature extraction module include the wall thickness of the rectal tumor lesion area, the density difference between the tumor and the normal wall, whether spiculated (burr-like) protrusions exist, and whether adjacent structures are involved.
3. The enhanced CT image rectal cancer stage auxiliary diagnosis system based on deep learning of claim 2, wherein the visualization module is used for displaying the relevant information of the characteristics of the focal region of the rectal tumor marked on the image, and the relevant information comprises marking the rectal cancer concentrated position, the wall thickness of the lesion, the burr protrusion around the lesion, the affected adjacent structure, the lesion lymph node information and TN stage results on the image.
4. A deep learning based enhanced CT image rectal cancer staging aid diagnostic system according to claim 2 or 3, characterized in that the working execution of the diagnostic system comprises the steps of:
s1, constructing an enhanced CT rectal cancer image data set and a labeling database;
S2, performing format conversion and image noise reduction on the enhanced CT image;
S3, the rectal lesion area discrimination module trains a self-attention-based deep learning model using the enhanced CT rectal cancer image dataset and the training data in the annotation database, judges whether the input enhanced CT image contains a suspected rectal tumor lesion area, and segments the lesion area;
s4, identifying periintestinal lesion lymph nodes by adopting a sequence self-adaptive feature fusion method according to CT image lymph node features;
S5, the lesion feature extraction module extracts appearance features including the wall thickness of the rectal tumor lesion area, the density difference between the tumor and the normal wall, peri-lesional spiculated protrusions and involvement of adjacent structures, and counts the number of peri-intestinal diseased lymph nodes to form a rectal lesion feature parameter set;
S6, the comprehensive diagnosis module fuses the identification information of the rectal tumor lesion area, the identification information of the lesion lymph node and the lesion characteristic information, and realizes the clinical TN stage auxiliary diagnosis of the rectal cancer by combining TN stage priori knowledge base comparison.
5. The deep learning-based enhanced CT image rectal cancer staging auxiliary diagnostic system according to claim 4, wherein:
In step S1, the step of constructing the enhanced CT rectal cancer image dataset and the labeling database includes:
S11, defining a data labeling format, and distributing different labels to rectal cancer lesions and suspicious and normal lymph nodes around the intestines corresponding to different rectal cancer stages;
s12, marking the enhanced CT image according to a specified data marking format by utilizing an image marking tool and combining a pathology knowledge base with the characteristics of the enhanced CT image;
s13, storing the annotation file and the original image file correspondingly, storing corresponding object information, and constructing a data set.
6. The deep learning-based enhanced CT image rectal cancer staging auxiliary diagnosis system according to claim 4, wherein in step S2, image data preprocessing is adopted, and the steps include:
S21, uniformly converting the enhanced CT rectal cancer image data, stored in the original medical images as DICOM image series, into the NIfTI data format, so that the original CT image data format is consistent with the corresponding label data format;
S22, reading converted NiFTI data by using a medical image processing library based on a deep learning frame, integrally converting CT rectal cancer image data and labeling tag data into a tensor data structure processed by the deep learning frame, and establishing a mapping relation between original image data and tag data;
S23, performing image resampling, noise reduction, random affine transformation and channel-dimension expansion on the image data and label data for which the mapping relation has been established, realizing image enhancement.
7. The deep learning-based enhanced CT image rectal cancer staging auxiliary diagnosis system according to claim 4, wherein in step S3, the rectal lesion area discrimination module constructs global dependency relationships of CT image features by using a self-attention mechanism and captures the overall internal correlation of the image data and image features, wherein the step of discriminating a suspected lesion area based on the self-attention mechanism is as follows:
S31, inputting the preprocessed CT image into a feature extraction network, and obtaining a feature map of a corresponding depth level through one or more downsampling and pooling operations;
S32, introducing an attention mechanism, performing corresponding compression on each channel of the feature map, and after compression, obtaining importance degrees corresponding to different channels through the operation of an activation function and converting the importance degrees into attention vectors;
S33, fusing the obtained feature importance degree weight into an original deep learning network structure feature diagram, and further guiding the attention point of the network to realize the fusion of attention mechanisms;
the specific formulas are as follows:
A = Att(X, θ) = δ(W2 δ(W1 GAP(X))),  (1)
Y = AX,  (2)
wherein A is the per-channel weight obtained from the attention calculation;
Att(·) is the attention calculation function, and X is the feature map extracted by the feature extraction convolutional network;
θ is the network parameter, and δ is the ReLU activation function, providing nonlinear gating in the network;
W1 and W2 correspond to two fully connected layers realizing feature dimensionality reduction, and GAP(·) is the global average pooling function;
Y is the feature map X after computation by the attention module.
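The attention computation of Eqs. (1)-(2) can be sketched in NumPy as below; the channel count, reduction ratio, and weight values are hypothetical, and δ is taken as ReLU for both layers exactly as stated above.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def channel_attention(X, W1, W2):
    """Eqs. (1)-(2): A = δ(W2 δ(W1 GAP(X))), Y = A ⊙ X channel-wise."""
    z = X.mean(axis=(1, 2))        # GAP over spatial dims -> shape (C,)
    A = relu(W2 @ relu(W1 @ z))    # Eq. (1): two FC layers with δ = ReLU
    Y = X * A[:, None, None]       # Eq. (2): reweight each channel of X
    return A, Y
```

Note that squeeze-and-excitation-style blocks often use a sigmoid on the second layer to bound the weights in (0, 1); the patent's formula specifies δ for both layers, so that is what the sketch implements.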
8. The deep learning-based enhanced CT image rectal cancer staging auxiliary diagnosis system according to claim 4, wherein in step S4, the lymph node diagnosis processing based on sequence-adaptive feature fusion comprises:
S41, comprehensively classifying and discriminating the fused sequence features and the feature information of the multi-frame data to obtain the positions and information of suspicious lymph nodes in the feature map;
S42, mapping the lymph node position information in the feature map back to the original enhanced CT image.
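S42's mapping from feature-map coordinates back to the original enhanced CT image can be sketched as a stride multiplication; this assumes a single uniform downsampling stride, a simplification of real detection networks, where per-level strides and padding offsets would also matter.

```python
def map_to_original(box, stride):
    """Map a feature-map bounding box (x1, y1, x2, y2) back to
    original-image pixel coordinates, assuming a uniform stride."""
    return tuple(int(round(c * stride)) for c in box)
```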
9. The deep learning-based enhanced CT image rectal cancer staging auxiliary diagnosis system according to claim 4, wherein in step S6, the comprehensive diagnosis module fuses the rectal lesion region discrimination information, the lesion lymph node identification information and the examination object's part characteristics, and compares them against a TN staging prior knowledge base to give a rectal cancer TN staging auxiliary diagnosis result:
S61, normalizing the suspicious lesion position information, suspicious lymph node information and appearance characteristic information extracted by the feature extraction network to achieve a consistent data format;
S62, eliminating semantic differences among the different kinds of information through corresponding fusion networks, realizing feature splicing and fusion;
S63, classifying and discriminating the integrated comprehensive information by comparison with the corresponding clinical knowledge base, realizing comprehensive discrimination of the rectal cancer information;
S64, mapping the discrimination result of S63 back to the original enhanced CT image, realizing the corresponding TN labeling and visualization.
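A minimal sketch of S61-S63: z-score normalisation stands in for format unification, concatenation for feature splicing, and a single linear layer for the fusion and classification networks. All function names, feature layouts, and weights here are illustrative assumptions.

```python
import numpy as np

def fuse_and_classify(lesion_feat, lymph_feat, organ_feat, W, b):
    def zscore(v):                                   # S61: unify formats
        v = np.asarray(v, dtype=np.float32)
        return (v - v.mean()) / (v.std() + 1e-8)
    fused = np.concatenate([zscore(lesion_feat),     # S62: splice and
                            zscore(lymph_feat),      # fuse the features
                            zscore(organ_feat)])
    logits = W @ fused + b                           # S63: classify
    return int(np.argmax(logits))                    # predicted stage index
```

In the patented system the classification would additionally be checked against the clinical knowledge base before the result is mapped back onto the CT image (S64).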
10. The deep learning-based enhanced CT image rectal cancer staging auxiliary diagnosis system according to claim 5, wherein in step S11, five different labels are specified for the intestinal wall, with stage-label correspondences T0-label1, T1-label2, T2-label3, T3-label4 and T4-label5; two different labels are specified for lymph nodes, with corresponding states normal lymph node-label6 and suspicious lymph node-label7.
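The label scheme enumerated in claim 10 can be encoded as a simple lookup table; the dictionary and helper names below are hypothetical, not part of the patent.

```python
# Stage and lymph-node label assignments as enumerated in claim 10.
STAGE_LABELS = {"T0": 1, "T1": 2, "T2": 3, "T3": 4, "T4": 5}
LYMPH_LABELS = {"normal": 6, "suspicious": 7}

def label_for(structure, value):
    """Look up the annotation label for an intestinal-wall stage
    ('wall') or a lymph-node state ('lymph')."""
    table = STAGE_LABELS if structure == "wall" else LYMPH_LABELS
    return table[value]
```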
CN202210128818.6A 2022-02-11 2022-02-11 Enhanced CT imaging rectal cancer staging auxiliary diagnosis system based on deep learning Active CN114782307B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210128818.6A CN114782307B (en) 2022-02-11 2022-02-11 Enhanced CT imaging rectal cancer staging auxiliary diagnosis system based on deep learning

Publications (2)

Publication Number Publication Date
CN114782307A CN114782307A (en) 2022-07-22
CN114782307B true CN114782307B (en) 2025-03-25

Family

ID=82423112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210128818.6A Active CN114782307B (en) 2022-02-11 2022-02-11 Enhanced CT imaging rectal cancer staging auxiliary diagnosis system based on deep learning

Country Status (1)

Country Link
CN (1) CN114782307B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115274099B (en) * 2022-09-26 2022-12-30 之江实验室 Human-intelligent interactive computer-aided diagnosis system and method
CN115760868A (en) * 2022-10-14 2023-03-07 广东省人民医院 Colorectal and colorectal cancer segmentation method, system, device and medium based on topology perception
CN115588504A (en) * 2022-10-28 2023-01-10 大连大学附属中山医院 A monitoring and management system based on molecular imaging technology
CN116386902B (en) * 2023-04-24 2023-12-19 北京透彻未来科技有限公司 Artificial intelligent auxiliary pathological diagnosis system for colorectal cancer based on deep learning
CN116342859B (en) * 2023-05-30 2023-08-18 安徽医科大学第一附属医院 A method and system for identifying lung tumor regions based on imaging features
WO2025073099A1 (en) * 2023-10-07 2025-04-10 深圳先进技术研究院 Lymph node metastasis staging prediction method and apparatus, computer device, and storage medium
CN117708706B (en) * 2024-02-06 2024-05-28 山东未来网络研究院(紫金山实验室工业互联网创新应用基地) Method and system for classifying breast tumors by enhancing and selecting end-to-end characteristics
CN118136237B (en) * 2024-03-20 2024-11-01 中国医学科学院肿瘤医院 Esophageal cancer screening system and method based on image processing
CN117994255B (en) * 2024-04-03 2024-06-07 中国人民解放军总医院第六医学中心 Anal fissure detecting system based on deep learning
CN118334439A (en) * 2024-04-26 2024-07-12 北京安德医智科技有限公司 A processing method and device for N-stage classification prediction based on CT images
CN119130916B (en) * 2024-08-14 2025-05-30 北京透彻未来科技有限公司 Universal organ lymph node metastasis analysis system based on joint deep learning model
CN119762447A (en) * 2024-12-11 2025-04-04 靖江市人民医院 Intelligent diagnosis system and method for early liver cancer based on multi-phase enhanced CT
CN119324034B (en) * 2024-12-19 2025-06-24 南昌大学第一附属医院 An image-based auxiliary diagnosis method and system for rectal cancer

Citations (2)

Publication number Priority date Publication date Assignee Title
CN110472629A (en) * 2019-08-14 2019-11-19 青岛大学附属医院 A kind of pathological image automatic recognition system and its training method based on deep learning
CN112132917A (en) * 2020-08-27 2020-12-25 盐城工学院 Intelligent diagnosis method for rectal cancer lymph node metastasis

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN112651935A (en) * 2020-12-23 2021-04-13 苏州普瑞斯仁信息科技有限公司 Automatic staging system for tumor lymph nodes
CN112991295B (en) * 2021-03-12 2023-04-07 中国科学院自动化研究所 Lymph node metastasis image analysis system, method and equipment based on deep learning


Also Published As

Publication number Publication date
CN114782307A (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN114782307B (en) Enhanced CT imaging rectal cancer staging auxiliary diagnosis system based on deep learning
US11101033B2 (en) Medical image aided diagnosis method and system combining image recognition and report editing
CN108537773B (en) A method for intelligently assisted identification of pancreatic cancer and pancreatic inflammatory diseases
Yousef et al. A holistic overview of deep learning approach in medical imaging
Chen et al. Medical image segmentation and reconstruction of prostate tumor based on 3D AlexNet
Mokni et al. An automatic Computer-Aided Diagnosis system based on the Multimodal fusion of Breast Cancer (MF-CAD)
Jin et al. Artificial intelligence in radiology
CN107133638B (en) Multi-parameter MRI prostate cancer CAD method and system based on two classifiers
CN111709950B (en) Mammary gland molybdenum target AI auxiliary screening method
Singh et al. Radiological diagnosis of chronic liver disease and hepatocellular carcinoma: a review
JP2014508021A (en) Medical device for examining the neck
CN101103924A (en) Breast cancer computer-aided diagnosis method and system based on mammography
KR102620046B1 (en) Method and system for breast ultrasonic image diagnosis using weakly-supervised deep learning artificial intelligence
CN116630680B (en) Dual-mode image classification method and system combining X-ray photography and ultrasound
Qian et al. ProCDet: A new method for prostate cancer detection based on MR images
Nguyen-Tat et al. Enhancing brain tumor segmentation in MRI images: A hybrid approach using UNet, attention mechanisms, and transformers
Liu et al. Automated classification of cervical lymph-node-level from ultrasound using Depthwise Separable Convolutional Swin Transformer
Jian et al. HRU-Net: A high-resolution convolutional neural network for esophageal cancer radiotherapy target segmentation
Li et al. A dual attention-guided 3D convolution network for automatic segmentation of prostate and tumor
Rocha et al. Stern: Attention-driven spatial transformer network for abnormality detection in chest x-ray images
Saranya et al. A dense kernel point convolutional neural network for chronic liver disease classification with hybrid chaotic slime mould and giant trevally optimizer
Moglia et al. Deep learning for pancreas segmentation on computed tomography: a systematic review
CN114757894A (en) A system for analyzing bone tumor lesions
CN115375632A (en) Lung nodule intelligent detection system and method based on CenterNet model
Tufail et al. Extraction of region of interest from brain MRI by converting images into neutrosophic domain using the modified S-function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230104

Address after: 230000 81 Meishan Road, Hefei City, Anhui Province

Applicant after: ANHUI MEDICAL University

Applicant after: Artificial Intelligence Research Institute of Hefei comprehensive national science center (Artificial Intelligence Laboratory of Anhui Province)

Applicant after: The First Affiliated Hospital of Anhui Medical University

Applicant after: University of Science and Technology of China

Address before: 230000 no.218 Jixi Road, Hefei, Anhui Province

Applicant before: The First Affiliated Hospital of Anhui Medical University

Applicant before: University of Science and Technology of China

GR01 Patent grant
GR01 Patent grant