Disclosure of Invention
Aiming at the defects of the prior art, the invention aims to provide an enhanced CT image rectal cancer stage auxiliary diagnosis system (also an auxiliary diagnosis device) based on deep learning, addressing the problems that rectal enhanced CT image data have a complex structure and that cancerous regions and their stages are difficult to distinguish. To this end, the invention studies annotation of enhanced CT rectal cancer images and construction of the corresponding data set, discriminates rectal cancer lesion regions with a self-attention deep learning model, identifies metastatic lymph nodes by sequence-adaptive feature fusion, designs and implements an intelligent rectal cancer staging auxiliary diagnosis system, verifies it in clinical application experiments, and improves the precision and efficiency of comprehensive preoperative data acquisition for rectal cancer surgery.
In order to achieve the aim, the technical scheme adopted by the invention is that the enhanced CT image rectal cancer stage auxiliary diagnosis system based on deep learning comprises an enhanced CT image input module, an enhanced CT image and labeling database, an image preprocessing module, a rectum lesion area discrimination module, a lesion lymph identification module, a lesion feature extraction module, a comprehensive diagnosis module and a visualization module;
the enhanced CT image input module is used for inputting a rectum enhanced CT image;
The enhanced CT image and labeling database is used for storing the rectal CT images input by the CT image input module, together with the enhanced CT rectal cancer image data set and the annotation data set;
The image preprocessing module is used for carrying out noise reduction and image enhancement processing on the enhanced CT image;
The rectal lesion area discrimination module is used for discriminating a suspected rectal tumor area and segmenting the area;
the lesion lymph identification module is used for identifying periintestinal lesion lymph nodes;
the focus feature extraction module is used for identifying and collecting feature data of a focus region of the rectal tumor, counting the number of periintestinal lesion lymph nodes and forming a rectal lesion feature parameter set;
The comprehensive diagnosis module is used for fusing the identification information of the rectal lesion area, the identification information of the lesion lymph nodes and the features of the examined region, and giving a TN stage auxiliary diagnosis result for the rectal cancer by comparison against a TN stage prior knowledge base;
The visualization module is used for displaying the input enhanced CT image and labeling the relevant information of the characteristics of the tumor focus area of the rectum on the image.
Further, the lesion feature extraction module identifies and collects the feature parameters of the rectal tumor lesion region, including the wall thickness of the lesion region, the density difference between the tumor and the normal wall, whether spiculated (burr-shaped) protrusions exist, and whether adjacent structures are involved.
Still further, the visualization module is used for displaying, on the image, the relevant information of the features of the rectal cancer lesion region, including the location of the lesion, the wall thickness of the lesion, spiculated protrusions around the lesion, involved adjacent structures, lesion lymph node information, and the TN stage result marked on the image.
Still further, the system work execution includes the steps of:
S1, constructing an enhanced CT rectal cancer image data set and a labeling database;
S2, performing format conversion and image noise reduction on the enhanced CT image;
S3, the rectal lesion area discrimination module trains a self-attention-based deep learning model with the enhanced CT rectal cancer image data set and the training data in the labeling database, judges whether the input enhanced CT image contains a suspected rectal tumor area, and segments the lesion area;
S4, identifying periintestinal lesion lymph nodes with a sequence-adaptive feature fusion method according to the lymph node features in the CT image;
S5, the lesion feature extraction module extracts the wall thickness of the rectal tumor lesion region, the density difference between the tumor and the normal wall, spiculated protrusions around the lesion, and the involvement of adjacent structures as appearance features, and counts the number of periintestinal lesion lymph nodes to form a rectal lesion feature parameter set;
S6, the comprehensive diagnosis module fuses the identification information of the rectal tumor lesion area, the identification information of the lesion lymph nodes and the lesion feature information, and realizes clinical TN stage auxiliary diagnosis of the rectal cancer by comparison against a TN stage prior knowledge base.
Still further, in step S1, the step of constructing the enhanced CT rectal cancer image dataset and the labeling database includes:
S11, defining a data labeling format, and distributing different labels to rectal cancer lesions and suspicious and normal lymph nodes around the intestines corresponding to different rectal cancer stages;
S12, annotating the enhanced CT image in the specified data labeling format using an image annotation tool, a pathology knowledge base and the features of the enhanced CT image;
S13, storing the annotation file and the original image file correspondingly, storing corresponding necessary object information, and constructing a data set.
Still further, in step S2, image data preprocessing is adopted, and the steps include:
S21, uniformly converting the enhanced CT rectal cancer image data, stored in the original medical images as DICOM image series, into the NIfTI data format, so that the original CT image data format is consistent with the corresponding label data format;
S22, reading the converted NIfTI data with a medical image processing library based on a deep learning framework, converting the CT rectal cancer image data and the annotation label data into the tensor data structure processed by the deep learning framework, and establishing a mapping relation between the original image data and the label data;
S23, performing image resampling, noise reduction, random affine transformation and channel-dimension addition on the mapped image data and label data to realize image enhancement.
Still further, in step S3, the discriminating module of the rectal lesion area adopts a self-attention deep learning model, and utilizes a self-attention mechanism to construct a global dependency relationship of CT image features, and captures the overall internal correlation of image data and image features, wherein the discriminating step of the suspected lesion area based on the self-attention mechanism is as follows:
S31, inputting the preprocessed CT image into a feature extraction network, and obtaining a feature map of a corresponding depth level through one or more downsampling and pooling operations;
S32, introducing an attention mechanism, compressing each channel of the feature map, and after compression obtaining the importance degrees of the different channels through an activation function operation and converting them into attention vectors;
S33, fusing the obtained feature importance weights into the feature map of the original deep learning network structure, thereby guiding the attention of the network and realizing the fusion of the attention mechanism;
the specific formula is expressed as follows:
A = Att(X, θ) = δ(W2 · δ(W1 · GAP(X)))    (1)
Y = A · X    (2).
Still further, in step S4, the lesion lymph node identification module adopts a method based on sequence-adaptive feature fusion and uses continuous multi-frame image annotation information to identify weak and small lymph node targets, wherein the lymph node diagnosis processing steps based on sequence-adaptive feature fusion include:
S41, comprehensively classifying and judging the fused sequence features and the feature information of the multi-frame data to obtain suspicious lymph node positions and information in the feature map;
S42, mapping the lymph node position information in the feature map back to the original enhanced CT image.
Still further, in step S6, the comprehensive diagnosis module fuses information on the rectal lesion, lymph node metastasis and lesion features, and makes a judgment by comparison against the data of a medical pathology knowledge base:
S61, normalizing the suspicious lesion position information, suspicious lymph nodes and appearance characteristic information extracted by the characteristic extraction network to realize the consistency of data formats;
S62, eliminating semantic differences among different information through corresponding fusion networks, and realizing feature splicing and fusion;
S63, according to comparison with a corresponding clinical knowledge base, carrying out corresponding classification and discrimination on the integrated comprehensive information, and realizing comprehensive discrimination on the rectal cancer information;
S64, mapping the judgment result of the S63 back to the original enhanced CT image, and realizing corresponding TN labeling and visualization.
In step S11, five different labels are specified for the intestinal wall; the correspondence between stage and label is T0-label1, T1-label2, T2-label3, T3-label4 and T4-label5. Two different labels are specified for lymph nodes; the correspondence is normal lymph-label6 and suspicious lymph-label7.
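The label assignment of step S11 can be written directly as a lookup table; the code below is a plain transcription of the correspondence stated above, with the helper function `label_for` added only for illustration:

```python
# Label scheme from step S11: five intestinal-wall labels (T0-T4) and
# two lymph-node labels, exactly as specified in the text.
STAGE_LABELS = {"T0": 1, "T1": 2, "T2": 3, "T3": 4, "T4": 5}
LYMPH_LABELS = {"normal_lymph": 6, "suspicious_lymph": 7}

def label_for(structure):
    """Return the integer label for a stage or lymph-node class (or None)."""
    return {**STAGE_LABELS, **LYMPH_LABELS}.get(structure)

print(label_for("T2"), label_for("suspicious_lymph"))  # 3 7
```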
The invention has the technical effects that:
(1) The invention constructs a brand-new, effective rectal cancer enhanced CT image data set from scratch: preoperative enhanced CT image data of rectal cancer at different stages, together with the associated pathology data, are collected, and the image data are annotated with cancerous regions, lymphatic metastasis regions and stage diagnosis information to establish the rectal cancer data set.
(2) The invention researches an image segmentation network based on a self-attention deep learning model to realize the discrimination of the rectal lesion area. The self-attention mechanism, which computes each response as a feature-weighted sum over all positions, performs well at modeling global dependencies; this helps capture the overall internal correlation of the image data and image features, thereby improving the segmentation precision of the lesion area and making the lesion area easier to distinguish.
(3) The invention provides a deep learning model for identifying a weak and small lymph node target by using continuous multi-frame image annotation information. The continuous multi-frame lymph node labeling information of the same patient is correspondingly learned, and the multiple lymph node labeling information is fused by a self-adaptive method, so that the identification of the weak and small target of the metastatic lymph node is realized.
(4) The invention has the support of hospitals in image acquisition, and can obtain a large number of high-quality rectal cancer thin-layer enhanced CT images. Based on a knowledge base of pathology, the rectum image and the surrounding lymph image are considered during processing, and the rectum cancer is judged by utilizing the information of the rectum image and the surrounding lymph image, so that the accuracy of diagnosis is improved.
(5) The invention introduces human-machine interaction into the computer-aided diagnosis device, so that a doctor can interact with the device system during use. In the segmentation and detection stages, the doctor can correct the output of each stage to make it more accurate. In the classification stage, correctly classified results are rewarded and incorrectly classified results are penalized. The system performs reinforcement learning on this feedback, and its performance improves progressively.
Detailed Description
Referring to fig. 1, the invention provides a deep learning-based enhanced CT image rectal cancer stage auxiliary diagnosis system, which comprises an enhanced CT image input module, an enhanced CT image and labeling database, an image preprocessing module, a rectal lesion area discriminating module, a lesion lymph identifying module, a lesion feature extracting module, a comprehensive diagnosis module and a visualization module.
The device operates according to the following steps:
S1, constructing an enhanced CT rectal cancer image data set and a labeling database;
S2, performing format conversion and image noise reduction on the enhanced CT image;
S3, the rectal lesion area discrimination module judges, based on a self-attention deep learning model, whether the CT image contains a suspected rectal tumor lesion area, and segments the lesion area;
S4, identifying periintestinal lesion lymph nodes according to the characteristics of the CT image lymph nodes by adopting a sequence self-adaptive characteristic fusion method by the lesion lymph identification module;
S5, the lesion feature extraction module identifies and collects appearance feature data of the rectal tumor lesion region, including the wall thickness, the density difference between the tumor and the normal wall, whether spiculated protrusions exist and whether adjacent structures are involved, and counts the number of periintestinal lesion lymph nodes to form a rectal lesion feature parameter set;
S6, the comprehensive diagnosis module fuses the identification information of the rectal tumor lesion area, the identification information of the lesion lymph nodes and the lesion feature information, and realizes TN stage auxiliary diagnosis of the rectal cancer by comparison against a TN stage prior knowledge base;
The visualization module is used for displaying the input enhanced CT image and marking the lesion wall thickness, spiculated protrusions, adjacent structure involvement, lesion lymph node information and the TN stage result on the image.
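As an illustrative sketch, steps S1 to S6 above can be strung together as a single pipeline. Every function below is a hypothetical stub standing in for the trained models described in the text, and the numeric thresholds are invented for demonstration only:

```python
# Illustrative end-to-end sketch of steps S1-S6. Every function is a stub:
# the real system uses trained deep learning models, and the thresholds
# below are invented for demonstration only.

def preprocess(ct_volume):
    # S2: format conversion and noise reduction (stubbed as identity)
    return ct_volume

def segment_lesion(ct_volume):
    # S3: self-attention segmentation of the suspected tumor area (stubbed)
    return {"lesion_mask": [[1, 0], [0, 0]]}

def detect_lymph_nodes(ct_volume):
    # S4: sequence-adaptive-fusion lymph node detection (stubbed)
    return [{"short_axis_mm": 6.1}]

def extract_features(lesion, nodes):
    # S5: build the rectal lesion feature parameter set
    return {"wall_thickness_mm": 8.0, "suspicious_node_count": len(nodes)}

def diagnose(features):
    # S6: compare against a TN-stage prior knowledge base (toy rules)
    t_stage = "T3" if features["wall_thickness_mm"] > 5.0 else "T1-T2"
    n_stage = "N+" if features["suspicious_node_count"] >= 1 else "N0"
    return t_stage, n_stage

def run_pipeline(ct_volume):
    vol = preprocess(ct_volume)
    lesion = segment_lesion(vol)
    nodes = detect_lymph_nodes(vol)
    return diagnose(extract_features(lesion, nodes))

print(run_pipeline([[0.0]]))  # ('T3', 'N+') with these stub values
```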
Compared with the prior art, the device provided by the invention can rapidly and accurately identify the rectal cancer lesion area and the metastasis of lesion lymph nodes, rapidly determine the clinical TN stage of the rectal cancer, provide an intelligent screening reference for rectal cancer CT images, effectively improve the precision and efficiency of comprehensive preoperative data acquisition, and help doctors find problems in time and determine the corresponding TN stage. It thus provides a more reliable scientific basis for standardized operations by clinicians.
Further, the enhanced CT image rectal cancer stage auxiliary diagnosis system based on deep learning comprises an enhanced CT image input module, an enhanced CT image and labeling database, an image preprocessing module, a rectal lesion area discriminating module, a lesion lymph identifying module, a lesion feature extracting module, a comprehensive diagnosis module and a visualization module. Wherein:
the enhanced CT image input module is used for inputting a rectum enhanced CT image;
The enhanced CT image and labeling database is used for storing the rectum CT image input by the CT image input module and enhancing a CT rectum cancer image data set and a labeling data set;
The image preprocessing module is used for carrying out noise reduction and image enhancement processing on the enhanced CT image;
The rectal lesion area discrimination module is used for discriminating a suspected rectal tumor area and segmenting the area;
the lesion lymph identification module is used for identifying periintestinal lesion lymph nodes;
The lesion feature extraction module is used for identifying and collecting feature data of the wall thickness of the rectal tumor lesion region, the density difference between the tumor and the normal wall, whether spiculated protrusions exist and whether adjacent structures are involved, counting the number of periintestinal lesion lymph nodes, and forming a rectal lesion feature parameter set;
The comprehensive diagnosis module is used for fusing the identification information of the rectal lesion area, the identification information of the lesion lymph nodes and the features of the examined region, and giving a TN stage auxiliary diagnosis result for the rectal cancer by comparison against a TN stage prior knowledge base.
The visualization module is used for displaying the input enhanced CT image and marking, on the image, information on the lesion location, the lesion wall thickness, spiculated protrusions around the lesion, involved adjacent structures and lesion lymph nodes, together with the TN stage result;
The deep learning-based enhanced CT image rectal cancer stage auxiliary diagnosis system of the invention operates according to the following steps:
S1, constructing an enhanced CT rectal cancer image data set and a labeling database;
S2, performing format conversion and image noise reduction on the enhanced CT image;
S3, the rectal lesion area discrimination module trains a self-attention-based deep learning model with the enhanced CT rectal cancer image data set and the training data in the labeling database, judges whether the input enhanced CT image contains a suspected rectal tumor lesion area, and segments the lesion area;
S4, identifying periintestinal lesion lymph nodes with a sequence-adaptive feature fusion method according to the lymph node features in the CT image;
S5, the lesion feature extraction module identifies and collects appearance feature data of the wall thickness of the rectal tumor lesion region, the density difference between the tumor and the normal wall, spiculated protrusions around the lesion and the involvement of adjacent structures, and counts the number of periintestinal lesion lymph nodes to form a rectal lesion feature parameter set;
S6, the comprehensive diagnosis module fuses the identification information of the rectal tumor lesion area, the identification information of the lesion lymph nodes and the lesion feature information, and realizes clinical TN stage auxiliary diagnosis of the rectal cancer by comparison against a TN stage prior knowledge base.
In step S1, the step of constructing the enhanced CT rectal cancer image dataset and the labeling database includes:
S11, defining a data labeling format, and distributing different labels to rectal cancer lesions and suspicious and normal lymph nodes around the intestines corresponding to different rectal cancer stages;
S12, annotating the enhanced CT image in the specified data labeling format using an image annotation tool, a pathology knowledge base and the features of the enhanced CT image;
S13, storing the annotation file and the original image file correspondingly, storing corresponding necessary object information, and constructing a data set.
In step S2, image data preprocessing is employed, the steps including:
S21, uniformly converting the enhanced CT rectal cancer image data, stored in the original medical images as DICOM image series, into the NIfTI data format, so that the original CT image data format is consistent with the corresponding label data format;
S22, reading the converted NIfTI data with a medical image processing library based on a deep learning framework, converting the CT rectal cancer image data and the annotation label data into the tensor data structure processed by the deep learning framework, and establishing a mapping relation between the original image data and the label data;
S23, performing image resampling, noise reduction, random affine transformation and channel-dimension addition on the mapped image data and label data, realizing image enhancement and facilitating the performance of the subsequent deep learning network model.
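The preprocessing operations of step S23 can be sketched on a synthetic volume as follows. The zoom factors and smoothing sigma below are illustrative, and the DICOM-to-NIfTI conversion of step S21 (which in practice would use a library such as SimpleITK) is omitted so the sketch stays self-contained:

```python
import numpy as np
from scipy import ndimage

# Sketch of step S23 on a synthetic CT-like volume: resampling, Gaussian
# noise reduction, and adding a channel dimension. All parameters are
# illustrative, not the patent's actual settings.
rng = np.random.default_rng(0)
volume = rng.normal(size=(8, 32, 32))            # (depth, height, width)

resampled = ndimage.zoom(volume, (1.0, 0.5, 0.5), order=1)  # resampling
denoised = ndimage.gaussian_filter(resampled, sigma=1.0)    # noise reduction
tensor = denoised[np.newaxis, ...]               # add channel axis for the net

print(tensor.shape)  # (1, 8, 16, 16)
```

A random affine transformation (the remaining S23 operation) would be applied identically to the image and its label volume so that the mapping of step S22 is preserved.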
Further, in step S3, the rectal lesion area discrimination module adopts a self-attention deep learning model, utilizing the self-attention mechanism to construct the global dependency relationship of the CT image features and to capture the overall internal correlation of the image data and image features, thereby improving the segmentation precision and the discrimination precision of the lesion area. The discrimination steps for a suspected lesion area are as follows:
S31, inputting the preprocessed CT image into a feature extraction network, and obtaining a feature map of a corresponding depth level through one or more downsampling and pooling operations;
S32, introducing a self-attention mechanism to acquire the global feature information of the CT image, capture channel relations and improve the feature representation capability: each channel of the feature map is compressed, and after compression the importance of the different channels is obtained through fully connected layers and an activation function and converted into an attention vector, which helps the model distinguish the rectal wall region from background organs and facilitates measurement of the rectal wall thickness;
S33, fusing the obtained feature importance weights into the feature map of the original deep learning network structure, thereby guiding the network to focus on the lesion area and realizing the fusion of the attention mechanism, where the specific formulas are as follows:
A = Att(X, θ) = δ(W2 · δ(W1 · GAP(X)))    (1)
Y = A · X    (2)
Wherein A is the per-dimension weight obtained after the attention calculation, Att() is the attention calculation function, X is the feature map extracted by the feature extraction convolutional network, and θ is the network parameter. δ is the ReLU activation function, which provides a nonlinear gating operation in the network; W1 and W2 correspond to two fully connected layers that realize feature dimension reduction; GAP() is the global average pooling function; and Y is the feature map obtained after X is weighted by the attention module.
The attention of the network is continuously adjusted, guiding the network in repeated experiments to focus further on the lesion area so as to improve its ability to discriminate suspected lesion areas.
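Equations (1) and (2) can be sketched in NumPy as a channel-attention block. The document states that δ is ReLU, so ReLU is used for both activations here; the weight shapes and the channel-reduction ratio are illustrative assumptions:

```python
import numpy as np

# NumPy sketch of equations (1) and (2):
#   A = δ(W2 · δ(W1 · GAP(X))),   Y = A · X
# δ is ReLU as stated in the text; weight shapes are illustrative.

def relu(v):
    return np.maximum(v, 0.0)

def channel_attention(X, W1, W2):
    # X: (channels, H, W) feature map; GAP averages over the spatial dims
    s = X.mean(axis=(1, 2))         # GAP(X), shape (C,)
    A = relu(W2 @ relu(W1 @ s))     # eq. (1): per-channel attention weights
    return A[:, None, None] * X     # eq. (2): Y = A · X (channel-wise scaling)

C = 4
X = np.ones((C, 2, 2))
W1 = np.eye(C // 2, C)              # fully connected layer: reduce channels
W2 = np.eye(C, C // 2)              # fully connected layer: restore channels
Y = channel_attention(X, W1, W2)
print(Y.shape)  # (4, 2, 2)
```

With these identity-like toy weights, the first two channels pass through unchanged and the last two are suppressed to zero, showing how the attention vector re-weights channels.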
Further, in step S4, the lesion lymph node identification module adopts a method based on sequence-adaptive feature fusion and uses continuous multi-frame image annotation information to identify weak and small lymph node targets. The lymph node diagnosis processing steps based on sequence-adaptive feature fusion include:
First, multiple sequential feature images are combined into one feature image through the feature extraction network, and voxel information, inter-sequence continuous change information and similarity information are extracted correspondingly.
S41, comprehensively classifying and judging the fused sequence features and the feature information of the multi-frame data to obtain suspicious lymph node positions and information in the feature map;
S42, mapping the lymph node position information in the feature map back to the original enhanced CT image, realizing the localization of suspicious lymph node positions in the original image.
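The fusion of features from consecutive frames can be sketched as an adaptively weighted sum. Softmax weighting over a per-frame score is one plausible scheme and is an assumption here, not the patent's exact method:

```python
import numpy as np

# Sketch of sequence-adaptive feature fusion: feature maps from consecutive
# CT frames are combined with adaptive weights. The softmax weighting below
# is an illustrative assumption.

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def fuse_frames(frame_feats, frame_scores):
    # frame_feats: (num_frames, H, W); frame_scores: (num_frames,)
    w = softmax(frame_scores)
    return np.tensordot(w, frame_feats, axes=1)  # weighted sum over frames

feats = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 3.0)])
fused = fuse_frames(feats, np.array([0.0, 0.0]))  # equal scores -> plain mean
print(fused[0, 0])  # 2.0
```

In the full method the scores themselves would be learned, so that frames in which a small lymph node is clearly visible contribute more to the fused feature map.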
Further, in step S6, the comprehensive diagnosis module fuses information on the rectal lesion, lymph node metastasis and lesion features, and makes a judgment by comparison against the data of a medical pathology knowledge base:
S61, normalizing the suspicious lesion position information, the suspicious lymph node information and the lesion appearance feature information extracted by the feature extraction network, realizing consistency of the data formats;
S62, eliminating semantic differences among different information through corresponding fusion networks, and realizing feature splicing and fusion;
S63, according to comparison with a corresponding clinical knowledge base, carrying out corresponding classification and discrimination on the integrated comprehensive information, and realizing comprehensive discrimination on the rectal cancer information;
S64, mapping the judgment result of the S63 back to the original enhanced CT image, and realizing corresponding TN labeling and visualization.
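Steps S61 and S62 can be sketched as normalization followed by feature splicing. Min-max normalization is one common choice and is assumed here; the patent does not fix the exact scheme, and the feature values are invented for illustration:

```python
import numpy as np

# Sketch of steps S61-S62: normalize heterogeneous feature vectors to a
# consistent format, then splice (concatenate) them for the fusion network.
# Min-max normalization and the sample values are illustrative assumptions.

def min_max(v):
    v = np.asarray(v, dtype=float)
    span = v.max() - v.min()
    return (v - v.min()) / span if span else np.zeros_like(v)

lesion_feats = min_max([8.0, 2.0, 5.0])   # e.g. wall thickness, density diff
node_feats = min_max([6.0, 4.0])          # e.g. node short-axis diameters
fused_vec = np.concatenate([lesion_feats, node_feats])  # S62: splicing
print(fused_vec)
```

The spliced vector would then be passed to the fusion network of S62 to eliminate semantic differences before the knowledge-base comparison of S63.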
The technical solutions of the embodiments of the present invention will be clearly and completely described below, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by persons skilled in the art without making creative efforts based on the embodiments of the present invention are included in the scope of protection of the present invention.
The invention provides a deep learning-based enhanced CT image rectal cancer stage auxiliary diagnosis system, which is a device for taking a rectal CT image as an analysis object, and can rapidly and accurately give a focus range and obtain a rectal cancer stage result by constructing and training an image segmentation network based on a self-attention deep learning model and a deep learning model for identifying a weak and small lymph node target by using continuous multi-frame image marking information.
The technical principle is that normal rectal CT images and CT images containing a lesion area differ in image gray scale, intestinal wall thickness and shape, and the size and morphology of periintestinal lymph nodes, and that cancers at different stages also differ in gray scale, shape and other characteristics; on this basis, a trained deep learning neural network can effectively identify the CT image through image feature extraction. For example, a T1-stage tumor invades the submucosa but not the muscularis propria, appearing as a localized enhancing lesion within the submucosa: a soft tissue density mass with a clear boundary, no thickening of the intestinal wall, and a smooth, clear outer edge of the intestinal wall. A T2-stage tumor invades the muscularis propria but remains confined to the intestinal wall, the key feature being that the outer muscle layer is not breached; it appears as a locally enhancing lesion of the rectal wall, variable in size and often lobulated and asymmetric, and when the long axis of the tumor coincides with the scanning plane the intestinal wall is irregularly thickened in a tubular shape and stiff, with a narrowed lumen and a smooth, clear outer edge. A T3-stage tumor grows through the outer muscle layer into the surrounding perirectal tissue; the outer edge of the muscle layer is rough, spread into the perirectal fat may appear as increased needle-like densities in the fat or as deformation and distortion of the muscularis propria, and a low-density ischemic necrosis area may form within larger tumors.
A T4-stage tumor invades surrounding structures such as the peritoneal reflection, pelvic wall, vagina, prostate, bladder or seminal vesicles, characterized by increased density, spots, streak-like shadows and soft tissue shadows in the surrounding adipose tissue, or by loss of the fat plane between adjacent organs; a low-density ischemic necrosis area may appear within larger tumors. For N staging, a lymph node short-axis diameter greater than 5 mm with a fuzzy boundary, uneven density and marked enhancement, or three or more clustered lymph nodes, is taken as the positive standard. These differences in image characteristics are mainly texture features and shape features. Texture features arise from gray-level features distributed according to certain rules, and regions with the same texture can be expressed as shape features; whether a suspicious lesion area is present can be distinguished from differences in texture features, especially at shape boundaries, and the lesion type can be further identified from the differences in texture and shape features.
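The N-stage positive standard stated above translates directly into a small rule check. The code below is a plain transcription of those criteria for illustration, not clinical software, and how the two criteria combine is an assumption where the text is ambiguous:

```python
# Rule check derived from the N-stage criteria in the text: a short-axis
# diameter over 5 mm with a fuzzy boundary, uneven density and marked
# enhancement, or three or more clustered lymph nodes, counts as positive.
# Illustrative transcription only; not clinical software.

def node_positive(short_axis_mm, fuzzy_boundary, uneven_density, enhanced):
    return short_axis_mm > 5.0 and fuzzy_boundary and uneven_density and enhanced

def n_stage_positive(nodes, clustered_count):
    suspicious = sum(node_positive(**n) for n in nodes)
    return suspicious > 0 or clustered_count >= 3

nodes = [dict(short_axis_mm=6.2, fuzzy_boundary=True,
              uneven_density=True, enhanced=True)]
print(n_stage_positive(nodes, clustered_count=0))  # True
```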
Based on the above recognition basis, the present invention provides an enhanced CT image rectal cancer stage auxiliary diagnosis system based on deep learning, as shown in fig. 1. The system comprises an enhanced CT image input module, an enhanced CT image and labeling database, an image preprocessing module, a rectum lesion area judging module, a lesion lymph identifying module, a lesion feature extracting module, a comprehensive diagnosis module and a visualization module, wherein:
the enhanced CT image input module is used for inputting a rectum enhanced CT image;
The enhanced CT image and labeling database is used for storing the rectum CT image input by the CT image input module and enhancing a CT rectum cancer image data set and a labeling data set;
The image preprocessing module is used for carrying out noise reduction and image enhancement processing on the enhanced CT image;
the rectum lesion area judging module is used for judging a suspected rectum tumor lesion area and dividing the lesion area;
the lesion lymph identification module is used for identifying periintestinal lesion lymph nodes;
The lesion feature extraction module is used for identifying and collecting appearance feature data such as the wall thickness of the rectal tumor lesion region, the density difference between the tumor and the normal wall, the presence or absence of spiculated protrusions around the lesion and the presence or absence of adjacent structure involvement, and counting the number of periintestinal lesion lymph nodes to form a rectal lesion feature parameter set;
The comprehensive diagnosis module is used for fusing identification information of a rectum lesion area, identification information of lesion lymph nodes and lesion characteristics, and combining TN stage priori knowledge base comparison to give a result of TN stage auxiliary diagnosis of the rectum cancer.
The visualization module is used for displaying the input enhanced CT image and marking the wall thickness of the rectal lesion, spiculated protrusions around the lesion, involved adjacent structures, lesion lymph node information and the TN stage result on the image;
Further, the operation and execution of the rectal cancer stage auxiliary diagnosis system comprises the following steps:
S1, constructing an enhanced CT rectal cancer image data set and a labeling database;
S2, performing format conversion and image noise reduction on the enhanced CT image;
S3, the rectal lesion area discrimination module trains a self-attention-based deep learning model with the enhanced CT rectal cancer image data set and the training data in the labeling database, judges whether the input enhanced CT image contains a suspected rectal tumor lesion area, and segments the lesion area;
s4, identifying periintestinal lesion lymph nodes by adopting a sequence self-adaptive feature fusion method according to CT image lymph node features;
s5, a focus feature extraction module identifies and collects outline feature data of the wall thickness, burr protrusion and the like (focus) of a focus region of the rectal tumor, counts the number of periintestinal lesion lymph nodes and forms a rectum lesion feature parameter set;
And S6, (a comprehensive diagnosis module) fusing the identification information of the rectal tumor lesion area, the identification information of the lesion lymph node and the lesion characteristic information, and combining with a TN stage priori knowledge base (comparison), so as to realize the TN stage auxiliary diagnosis of the rectal cancer.
In step S1, the step of constructing the enhanced CT rectal cancer image dataset and the labeling database includes:
S11, defining a data labeling format and assigning different labels to the rectal sections and peri-intestinal lymph nodes corresponding to different rectal cancer stages. Five different labels are defined for the intestinal wall, with the stage-label correspondence T0-label1, T1-label2, T2-label3, T3-label4 and T4-label5. Two different labels are defined for lymph nodes, with the state-label correspondence normal lymph node-label6 and suspicious lymph node-label7;
S12, labeling the enhanced CT image according to the specified data labeling format with an image labeling tool, combining a pathology knowledge base with the features of the enhanced CT image;
S13, storing each annotation file in correspondence with its original image file, storing the corresponding patient information, and constructing the data set, the file structure of which is shown in figure 2.
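The labeling scheme of step S11 can be sketched as a simple label table; the function and variable names below are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch of the S11 labeling scheme (names are assumptions).
STAGE_LABELS = {          # intestinal-wall T stages -> integer labels
    "T0": 1, "T1": 2, "T2": 3, "T3": 4, "T4": 5,
}
LYMPH_LABELS = {          # peri-intestinal lymph-node states -> integer labels
    "normal_lymph": 6, "suspicious_lymph": 7,
}

def label_for(region: str, key: str) -> int:
    """Return the integer label for a wall stage or a lymph-node state."""
    table = STAGE_LABELS if region == "wall" else LYMPH_LABELS
    return table[key]
```

A mask voxel annotated as stage T3 would then carry the value `label_for("wall", "T3")`, i.e. 4.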
Further, in step S2, the image data preprocessing step includes:
S21, uniformly converting the enhanced CT image data, stored as original medical images in DICOM image-series form, into the NIfTI data format, so that the original CT rectal cancer image data format is consistent with the format of the corresponding labeling data;
S22, reading the converted NIfTI data with a medical image processing library based on a deep learning framework, converting the CT rectal cancer image data and the labeling data into the tensor data structure processed by the deep learning framework, and establishing a mapping relation between the original image data and the label data;
S23, performing operations such as image resampling, noise reduction, random affine transformation, resizing and channel-dimension addition on the image data and label data for which the mapping relation has been established, thereby realizing image enhancement and facilitating improvement of the deep learning network performance.
Further, in step S3, the step of discriminating the suspicious lesion area based on the self-attention deep learning model includes:
S31, first inputting the preprocessed data into a feature extraction convolution network, and obtaining feature maps of the corresponding depth levels after feature extraction of the enhanced CT image data is completed through a series of downsampling and pooling operations;
S32, introducing a self-attention mechanism to acquire global feature information of the CT image, capture channel relations and improve the feature representation capability. Each channel of the feature map is compressed; after compression, the importance of the different channels is obtained through operations such as a fully connected layer and an activation function and converted into an attention vector, which better helps the model distinguish the rectal wall region from background organs and facilitates rectal wall thickness measurement;
S33, fusing the obtained feature importance weights into the feature map of the original deep learning network structure, further guiding the network to focus on the lesion area and realizing the fusion of the attention mechanism. The formulas are as follows:
A = Att(X, θ) = δ(W2 δ(W1 GAP(X)))  (1)
Y = AX  (2)
where A is the per-channel weight obtained after the attention calculation, Att() is the attention calculation function, X is the feature map extracted by the feature extraction convolution network, and θ is the network parameter. δ is the ReLU activation function, which provides the nonlinear gating operation in the network; W1 and W2 correspond to the two fully connected layers that realize feature dimension reduction and restoration; GAP() is the global average pooling function; and Y is the feature map obtained after X is weighted by the attention module.
The specific implementation of the self-attention model is shown in fig. 3, in which the size of the feature map obtained by the feature extraction network is N×H×W, where N is the channel dimension, H the height and W the width of the feature map. B is the N-dimensional channel weight vector obtained after global pooling of the feature map. C is the feature map after the fully connected layers and ReLU activation functions, and D is the feature map after C undergoes the attention calculation and is adjusted back to the original dimensions;
the attention of the network is continuously adjusted, guiding the network in repeated training to focus further on the lesion area so as to improve its ability to discriminate the rectal tumor lesion region. The overall deep learning network block diagram is shown in fig. 4.
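Equations (1)–(2) can be sketched numerically as below; this is a minimal numpy sketch following the formulas as written (ReLU for both activations, a reduction ratio r as an assumed hyperparameter), not the disclosure's actual implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def channel_attention(X, W1, W2):
    """Equations (1)-(2): A = δ(W2 δ(W1 GAP(X))), Y = A·X.
    X has shape (N, H, W); W1 reduces the channel dimension, W2 restores it."""
    gap = X.mean(axis=(1, 2))             # GAP: one descriptor per channel, shape (N,)
    A = relu(W2 @ relu(W1 @ gap))         # per-channel attention weights, shape (N,)
    return A[:, None, None] * X           # reweight each channel of the feature map

rng = np.random.default_rng(0)
N, H, W, r = 8, 4, 4, 2                   # r: assumed channel reduction ratio
X = rng.standard_normal((N, H, W))
W1 = rng.standard_normal((N // r, N))     # dimension-reducing FC layer
W2 = rng.standard_normal((N, N // r))     # dimension-restoring FC layer
Y = channel_attention(X, W1, W2)          # Y keeps the N×H×W shape of X
```

Because Y has the same shape as X, the weighted map slots directly back into the network structure of step S33.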
Further, in step S4, the suspicious lymph node identification processing step based on the sequence adaptive feature fusion includes:
First, multiple sequence feature images are merged into one feature map through the feature extraction convolution network, extracting features such as voxel information, continuous inter-sequence change information and similarity;
S41, comprehensively classifying and discriminating the fused sequence features and the feature information of the multi-frame data to obtain the positions and information of suspicious lymph nodes in the feature map;
S42, mapping the position information of the lymph nodes in the feature map back to the original CT image, thereby locating the suspicious lymph nodes in the original image. A block diagram of the suspicious lymph node identification network based on sequence fusion is shown in fig. 5.
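The sequence-adaptive fusion of per-slice feature maps can be sketched as a learned weighted combination; the softmax weighting here is one plausible realization under assumed names, not the disclosure's exact network:

```python
import numpy as np

def fuse_sequence(slices, weights):
    """Adaptively fuse a short sequence of per-slice feature maps into one
    map: softmax-normalized weights emphasize the most informative frames."""
    w = np.exp(weights - weights.max())
    w = w / w.sum()                            # softmax over the sequence axis
    return np.tensordot(w, slices, axes=(0, 0))  # weighted sum of the slices

# Three toy 4x4 feature maps with constant values 0, 1, 2.
seq = np.stack([np.full((4, 4), i, dtype=float) for i in range(3)])
fused = fuse_sequence(seq, np.array([0.0, 0.0, 0.0]))  # equal weights -> mean map
```

With equal weights the fusion reduces to the per-voxel mean; during training the weights would be adapted so that frames carrying clearer lymph node evidence dominate.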
Further, in step S6, the comprehensive diagnosis method fuses the rectal lesion information, the lymph metastasis information and the lesion features, and makes its discrimination in combination with the medical pathology information base and the TN-stage prior knowledge base in the following steps:
S61, normalizing the suspicious lesion position information and the suspicious lymph node information extracted by the feature extraction convolution network to achieve consistency of the data formats;
S62, eliminating semantic differences between the different kinds of information through corresponding fusion networks, realizing feature splicing and fusion, and fusing the lesion lymph node identification information with the lesion contour feature information;
S63, comparing the fused information with the TN-stage prior knowledge base and classifying it accordingly, thereby realizing the clinical TN-stage auxiliary diagnosis of rectal cancer;
S64, mapping the diagnosis result back to the original CT image and realizing the corresponding TN-stage labeling and visual display. The resulting visual annotation is shown in fig. 6.
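Steps S61–S63 can be sketched as a normalize-fuse-compare pipeline; the prototype vectors, stage names and nearest-prototype comparison below are toy assumptions standing in for the fusion network and the TN-stage prior knowledge base:

```python
import numpy as np

# Toy prior "knowledge base": one prototype feature vector per TN stage.
# Stage names and values are illustrative only.
KNOWLEDGE_BASE = {
    "T1N0": np.array([0.2, 0.1, 0.0]),
    "T3N1": np.array([0.8, 0.6, 0.7]),
}

def diagnose(lesion_feat, lymph_feat):
    """S61: normalize both feature groups to a common [0, 1] range;
    S62: fuse them into one vector; S63: return the nearest-prototype
    TN stage from the knowledge base."""
    def norm(v):
        return np.clip(v, 0.0, 1.0)                 # toy normalization
    fused = (norm(lesion_feat) + norm(lymph_feat)) / 2.0
    return min(KNOWLEDGE_BASE,
               key=lambda k: np.linalg.norm(fused - KNOWLEDGE_BASE[k]))
```

For step S64 the returned stage string would then be drawn onto the original CT image together with the lesion and lymph node locations.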
Specifically, the whole network framework is trained as three sub-networks in stages.
The first sub-network is the training of the rectal suspected lesion area discrimination network. The tumor lesion area discrimination network is trained as a segmentation network using the enhanced CT image training sample set in the image library. Training is carried out with batch sizes of 16, 32, 64, 128 and 256, and the parameters giving the best training result are retained as the final hyperparameters. The data set is divided into a training set, a validation set and a test set in a 6:2:2 ratio. During training, as the number of iterations increases, the accuracy on the validation set improves continuously, while the accuracy on the test set first rises and then falls, i.e. the network tends to overfit. The test set performance is analyzed and the network is fine-tuned within a range of ±3% to obtain the optimal operating point for suspected lesion diagnosis. The parameters and network weights are saved, ending the first sub-network training.
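The 6:2:2 train/validation/test division can be sketched as below; the function name and fixed seed are assumptions for reproducibility, not part of the disclosure:

```python
import random

def split_dataset(samples, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle the sample IDs and split them into training, validation
    and test sets in the 6:2:2 ratio used for the sub-network training."""
    ids = list(samples)
    random.Random(seed).shuffle(ids)       # deterministic shuffle
    n = len(ids)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```

Splitting 100 patient IDs this way yields sets of 60, 20 and 20 samples whose union covers every ID exactly once.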
The second sub-network is the suspicious lymph node identification network based on sequence-adaptive features. The network is initially trained with single-frame suspicious lymph node data, again with batch sizes of 16, 32, 64, 128 and 256, retaining the parameters giving the best training result as the final hyperparameters. The data set division ratio and training procedure are the same as for the first sub-network. After the single-frame training result is obtained, the network structure is further improved and multi-frame sequence information is fused for training; the final experimental accuracy is 3% higher than that of the network using single-frame information, and the parameters are further fine-tuned to obtain better performance.
The third stage jointly trains the two sub-networks and, through operations such as a fully connected layer, carries out the comprehensive stage diagnosis of rectal cancer using the clinical medicine prior knowledge base. After the joint training, the whole network framework is further fine-tuned with the training set data, improving the recognition accuracy by 2%.
The foregoing is only a preferred embodiment of the present invention, but the scope of the present invention is not limited thereto; any equivalent substitution or modification made by a person skilled in the art according to the technical scheme and inventive concept of the present invention, within the scope disclosed by the present invention, shall be covered by the scope of protection of the present invention.