CN114494132B - Disease classification system based on deep learning and spatial statistical analysis of fiber tracts - Google Patents
Disease classification system based on deep learning and spatial statistical analysis of fiber tracts
- Publication number
- CN114494132B (application number CN202111603683.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- anisotropic
- images
- matrix
- deep learning
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/10092—Diffusion tensor magnetic resonance imaging [DTI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
Abstract
The invention discloses a disease classification system based on deep learning and fiber bundle spatial statistical analysis. The system comprises an acquisition module, a preprocessing module, an image generation module, a voxel extraction module, and a classification module. The acquisition module acquires brain images to be classified; the preprocessing module preprocesses the brain images to be classified; the image generation module generates an anisotropic score image, an average diffusivity image, and a radial diffusivity image based on the preprocessed brain image; the voxel extraction module extracts a first voxel matrix from the anisotropic score image, a second voxel matrix from the average diffusivity image, and a third voxel matrix from the radial diffusivity image, and splices the three matrices to obtain a splicing matrix; and the classification module inputs the splicing matrix into a trained deep learning classification model to obtain the disease classification result. Because voxels are extracted by blocks, information loss is prevented, and because the spliced blocks are classified with a deep learning model, the system achieves good classification accuracy.
Description
Technical Field
The invention relates to the technical field of artificial intelligence and medical image classification, in particular to a disease classification system based on deep learning and fiber bundle space statistical analysis.
Background
The statements in this section merely relate to the background of the present disclosure and may not necessarily constitute prior art.
Currently, Alzheimer's disease (AD) diagnosis based on DTI images mainly relies on methods based on a region of interest (ROI), methods based on a structural connection matrix (SCN), and the method of fiber bundle spatial statistical analysis, i.e. tract-based spatial statistics (TBSS).
However, these methods have problems. In the region of interest (ROI) based method, the region of interest needs to be defined in advance and brain regions with significant differences must be identified, which usually requires a doctor with extensive clinical experience; this places high demands on the doctor and often leads to misjudgment.
In the structural connection matrix (SCN) method, a brain structural network is obtained from the result of fiber tracking, and the network is analyzed to achieve disease diagnosis. However, this method emphasizes the importance of the brain network while ignoring other important characteristics, resulting in lower recognition accuracy.
The tract-based spatial statistics (TBSS) method can identify voxels in the brain with significant differences, but a single voxel contains little information, and a lesion usually covers a whole region; analyzing individual voxels therefore easily causes information loss and affects the final recognition accuracy.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a disease classification system based on deep learning and fiber bundle spatial statistical analysis. Starting from the voxels extracted by the tract-based spatial statistics (TBSS) method, and noting that the lesioned part of a patient's brain is usually a block rather than a single voxel point, the system takes a block centered on each voxel coordinate, splices the extracted blocks according to their spatial positions, and classifies the result with a deep learning network to achieve disease recognition.
In a first aspect, the present invention provides a disease classification system based on deep learning and fiber bundle space statistical analysis;
a disease classification system based on deep learning and fiber bundle spatial statistical analysis, comprising:
An acquisition module configured to acquire brain images to be classified;
A preprocessing module configured to preprocess brain images to be classified;
an image generation module configured to generate an anisotropic score image (FA), an average diffusivity image (MD), and a radial diffusivity image (RD) based on the preprocessed brain image;
The voxel extraction module is configured to extract a first voxel matrix from the anisotropic score image, extract a second voxel matrix from the average diffusivity image, extract a third voxel matrix from the radial diffusivity image, and splice the three matrices to obtain a spliced matrix;
and the classification module is configured to input the splicing matrix into the trained deep learning classification model to obtain a disease classification result.
In a second aspect, the present invention also provides an electronic device, including:
A memory for non-transitory storage of computer readable instructions, and
A processor for executing the computer-readable instructions,
Wherein the computer readable instructions, when executed by the processor, perform the steps of:
acquiring brain images to be classified;
preprocessing brain images to be classified;
Generating an anisotropic score image, an average diffusivity image, and a radial diffusivity image based on the preprocessed brain image;
Extracting a first voxel matrix from the anisotropic score image, extracting a second voxel matrix from the average diffusivity image, extracting a third voxel matrix from the radial diffusivity image, and splicing the three matrixes to obtain a spliced matrix;
and inputting the splicing matrix into the trained deep learning classification model to obtain a disease classification result.
In a third aspect, the present invention also provides a storage medium storing non-transitory computer readable instructions, wherein the non-transitory computer readable instructions, when executed by a computer, perform the steps of:
acquiring brain images to be classified;
preprocessing brain images to be classified;
Generating an anisotropic score image, an average diffusivity image, and a radial diffusivity image based on the preprocessed brain image;
Extracting a first voxel matrix from the anisotropic score image, extracting a second voxel matrix from the average diffusivity image, extracting a third voxel matrix from the radial diffusivity image, and splicing the three matrixes to obtain a spliced matrix;
and inputting the splicing matrix into the trained deep learning classification model to obtain a disease classification result.
In a fourth aspect, the present invention also provides a computer program product comprising a computer program for performing the following steps when run on one or more processors:
acquiring brain images to be classified;
preprocessing brain images to be classified;
Generating an anisotropic score image, an average diffusivity image, and a radial diffusivity image based on the preprocessed brain image;
Extracting a first voxel matrix from the anisotropic score image, extracting a second voxel matrix from the average diffusivity image, extracting a third voxel matrix from the radial diffusivity image, and splicing the three matrixes to obtain a spliced matrix;
and inputting the splicing matrix into the trained deep learning classification model to obtain a disease classification result.
Compared with the prior art, the invention has the beneficial effects that:
The invention applies deep learning to Alzheimer's disease classification. On the basis of the TBSS method, it provides an Alzheimer's disease classification system based on deep learning and fiber bundle spatial statistical analysis: voxels are extracted by blocks to prevent information loss, and the spliced blocks are classified with a deep learning model, yielding good classification accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is a schematic view of the basic structure of the present invention;
FIGS. 2 (a) -2 (c) are extracted mean FA skeleton diagrams of the present invention;
fig. 3 (a) -3 (c) show the difference voxels with P < 0.05 identified using randomise according to the present invention.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit exemplary embodiments according to the present invention. As used herein, unless the context clearly indicates otherwise, the singular forms are also intended to include the plural forms. Furthermore, the terms "comprises" and "comprising" and any variations thereof are intended to cover non-exclusive inclusions; for example, processes, methods, systems, products or devices that comprise a series of steps or units are not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such processes, methods, products or devices.
Embodiments of the invention and features of the embodiments may be combined with each other without conflict.
All data acquisition in the embodiments is conducted lawfully, in compliance with laws and regulations and with the agreements of the users.
Diffusion tensor imaging (DTI), a relatively new method for describing brain structure, is a special form of magnetic resonance imaging (MRI). DTI is the only imaging technique capable of displaying white matter fiber tracts in vivo and quantitatively showing abnormal changes of the white matter, which is of great significance for early identification, evaluation, and prognosis of diseases. For example, whereas conventional magnetic resonance imaging tracks hydrogen atoms in water molecules, diffusion tensor imaging maps the direction of water molecule movement. A diffusion tensor imaging map can reveal how brain tumors affect nerve cell connections and guide medical staff in brain surgery. It also reveals subtle abnormal changes associated with Alzheimer's disease, schizophrenia, and dyslexia.
Example 1
The present embodiment provides a disease classification system based on deep learning and fiber bundle space statistical analysis;
as shown in fig. 1, a disease classification system based on deep learning and fiber bundle space statistical analysis, comprising:
An acquisition module configured to acquire brain images to be classified;
A preprocessing module configured to preprocess brain images to be classified;
An image generation module configured to generate an anisotropic score image, an average diffusivity image, and a radial diffusivity image based on the preprocessed brain image;
The voxel extraction module is configured to extract a first voxel matrix from the anisotropic score image, extract a second voxel matrix from the average diffusivity image, extract a third voxel matrix from the radial diffusivity image, and splice the three matrices to obtain a spliced matrix;
and the classification module is configured to input the splicing matrix into the trained deep learning classification model to obtain a disease classification result.
Further, acquiring the brain image to be classified refers to acquiring a diffusion tensor imaging sequence from brain magnetic resonance imaging.
Further, the preprocessing module is configured to:
Eliminate eddy current and head movement distortion in the brain images to be classified, and remove the skull to obtain the brain image.
Illustratively, the preprocessing module uses the FMRIB Software Library (FSL), developed by the Oxford Centre for Functional MRI of the Brain (FMRIB Centre).
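As an illustration only, this preprocessing could be driven from Python roughly as sketched below; the FSL command-line tools eddy_correct, fslroi and bet are assumed to be installed, and the file names and the reference-volume index are assumptions, not values taken from the patent.

```python
import subprocess

def preprocess_dti(dwi="dwi.nii.gz"):
    """Sketch of the preprocessing step: eddy-current / head-movement correction,
    extraction of a b=0 reference volume, and skull stripping with BET.
    File names and the reference index are illustrative assumptions."""
    # Correct eddy-current and head-movement distortion (volume 0 as reference).
    subprocess.run(["eddy_correct", dwi, "dwi_corrected.nii.gz", "0"], check=True)
    # Extract the first (b=0) volume as a 3D reference image without diffusion gradients.
    subprocess.run(["fslroi", "dwi_corrected.nii.gz", "b0.nii.gz", "0", "1"], check=True)
    # Remove the skull with the Brain Extraction Tool; -m also writes a binary brain mask.
    subprocess.run(["bet", "b0.nii.gz", "brain.nii.gz", "-m"], check=True)

if __name__ == "__main__":
    preprocess_dti()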
Further, the image generation module is configured to:
and process the preprocessed brain image with a least-squares fit of the diffusion tensor model to generate an anisotropic score image, an average diffusivity image, and a radial diffusivity image.
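A minimal sketch of this tensor-fitting step is given below, assuming FSL's dtifit and the nibabel Python package: dtifit performs the least-squares tensor fit and writes FA, MD and eigenvalue (L1-L3) maps, and a radial diffusivity image can then be derived as the mean of the second and third eigenvalues. File names are assumptions.

```python
import subprocess
import nibabel as nib  # assumed available for reading/writing NIfTI images

def fit_tensor_and_rd(prefix="dti"):
    """Least-squares diffusion tensor fit with FSL's dtifit, then derive the
    radial diffusivity (RD) image as (L2 + L3) / 2. Paths are illustrative."""
    subprocess.run([
        "dtifit",
        "-k", "dwi_corrected.nii.gz",   # eddy/motion-corrected DWI data
        "-m", "brain_mask.nii.gz",      # brain mask from BET
        "-r", "bvecs", "-b", "bvals",   # gradient directions and b-values
        "-o", prefix,
    ], check=True)
    l2 = nib.load(f"{prefix}_L2.nii.gz")
    l3 = nib.load(f"{prefix}_L3.nii.gz")
    rd = (l2.get_fdata() + l3.get_fdata()) / 2.0
    nib.save(nib.Nifti1Image(rd, l2.affine, l2.header), f"{prefix}_RD.nii.gz")

if __name__ == "__main__":
    fit_tensor_and_rd()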
Further, extracting the first voxel matrix from the anisotropic score image specifically comprises:
extracting a cube block centered on the coordinates of each selected difference voxel in the training set;
and splicing all the cube blocks according to their spatial positions to obtain the first voxel matrix, as sketched in the example below.
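The following numpy sketch illustrates this block extraction and splicing; the cube side length (9, taken from the embodiment later in the description) and the concatenation layout are illustrative assumptions.

```python
import numpy as np

def extract_voxel_matrix(image, coords, size=9):
    """Cut a cube of side `size` centred on each selected difference-voxel
    coordinate and splice the cubes in spatial order. The block size and the
    concatenation layout are illustrative assumptions."""
    half = size // 2
    blocks = []
    # Sort coordinates so blocks are spliced according to their spatial position.
    for x, y, z in sorted(map(tuple, coords)):
        block = image[x - half:x + half + 1,
                      y - half:y + half + 1,
                      z - half:z + half + 1]
        if block.shape == (size, size, size):   # skip blocks that fall off the volume
            blocks.append(block)
    # Splice the cubes along the last (depth) axis to form one voxel matrix.
    return np.concatenate(blocks, axis=-1)

# Usage sketch: build the three matrices and stack them into the splicing matrix.
# fa_img, md_img, rd_img are 3D numpy arrays; coords are the screened voxel coordinates.
# spliced = np.concatenate([extract_voxel_matrix(fa_img, coords),
#                           extract_voxel_matrix(md_img, coords),
#                           extract_voxel_matrix(rd_img, coords)], axis=-1)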
Further, the coordinates of each selected difference voxel in the training set are obtained by the following steps (see the TBSS sketch after this list):
registering all anisotropic score images in the training set to a standard space;
then affine transforming the registered images into the MNI152 standard space, wherein MNI152 is provided by the Montreal Neurological Institute (MNI) and is obtained as a weighted average of structural magnetic resonance image data from 152 healthy people;
synthesizing the anisotropic score images of all subjects in the training set into a 4D file in the standard space, and simultaneously generating the average anisotropic score image of all subjects;
obtaining a skeleton of the average anisotropic score image, wherein the skeleton is a white matter fiber bundle skeleton obtained from the average FA image generated from all subjects;
taking the skeleton of the average anisotropic score image as the skeleton of the anisotropic score images of all subjects in the training set;
and determining voxels with differences between the skeletons of the anisotropic score images of the known patients in the training set and the skeletons of the anisotropic score images of the known normal control group in the training set, and screening the difference voxels to obtain the coordinates of each screened difference voxel in the training set.
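These registration and skeleton steps correspond to the standard TBSS pipeline in FSL; a sketch of how the command sequence might be invoked from Python is shown below. The skeleton threshold of 0.2 matches the embodiment described later; everything else (directory layout, file naming) is an assumption.

```python
import glob
import subprocess

# Sketch of the standard FSL TBSS pipeline assumed by the steps above,
# run from a directory containing one FA image per subject (paths are assumptions).
fa_images = sorted(glob.glob("*_FA.nii.gz"))
subprocess.run(["tbss_1_preproc", *fa_images], check=True)  # prepare FA images into FA/
subprocess.run(["tbss_2_reg", "-T"], check=True)            # nonlinear registration to the FMRIB58_FA target
subprocess.run(["tbss_3_postreg", "-S"], check=True)        # all_FA 4D file, mean FA image and skeleton
subprocess.run(["tbss_4_prestats", "0.2"], check=True)      # threshold the mean FA skeleton at 0.2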
Further, determining the voxels with differences between the patient anisotropic score images and the normal control group in the training set, and screening the difference voxels to obtain the coordinates of each screened difference voxel, specifically comprises (an illustrative invocation is sketched below):
using the randomise tool provided by FSL to determine the voxels with differences between the patient anisotropic score images and the normal control group, and correcting with a multiple-comparison method;
and screening out points with P values smaller than the set threshold as points with significant differences. The P value is the probability of a significant difference; according to the P value obtained by the significance test, P < 0.05 is taken as statistically different.
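For illustration, the statistical screening could be run as sketched below: FSL's randomise is applied to the skeletonised FA data with TFCE-based multiple-comparison correction, and the coordinates of voxels whose corrected P value is below 0.05 are then read out with nibabel. The design-matrix file names and the number of permutations are assumptions not specified in the text.

```python
import subprocess
import numpy as np
import nibabel as nib

# Group comparison on the skeletonised FA data with TFCE multiple-comparison correction.
# design.mat / design.con encode the patient-vs-control contrast; 500 permutations is an assumption.
subprocess.run([
    "randomise",
    "-i", "all_FA_skeletonised.nii.gz",
    "-o", "tbss",
    "-m", "mean_FA_skeleton_mask.nii.gz",
    "-d", "design.mat", "-t", "design.con",
    "-n", "500", "--T2",
], check=True)

# randomise stores corrected P values as 1 - p, so P < 0.05 corresponds to voxel values > 0.95.
corrp = nib.load("tbss_tfce_corrp_tstat1.nii.gz").get_fdata()
diff_coords = np.argwhere(corrp > 0.95)  # coordinates of the screened difference voxels
print(f"{len(diff_coords)} difference voxels selected")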
Further, the specific steps of extracting the second voxel matrix from the average diffusivity image or extracting the third voxel matrix from the radial diffusivity image are consistent with the specific steps of extracting the first voxel matrix from the anisotropic score image.
Further, the deep learning classification model includes LeNet, AlexNet, VGGNet, Inception-v4, ResNet, and DenseNet.
The LeNet classification model is one of the earliest convolutional neural networks; it extracts features through convolution, parameter sharing, and pooling, avoiding a large computational cost, and finally uses a fully connected network for classification and recognition.
The AlexNet classification model uses an 8-layer neural network with 5 convolutional layers and 3 fully connected layers (3 of the convolutional layers are followed by a max-pooling layer), with the ReLU (rectified linear unit) successfully used as the activation function of the convolutional neural network.
The VGGNet classification model consists of 5 groups of convolutional layers, 3 fully connected layers, and a softmax output layer; max-pooling is used between the groups, and the activation units of all hidden layers are ReLU (rectified linear unit) functions. It differs from AlexNet in the number of sub-layers in each convolutional group.
The Inception-v4 classification model does not use a residual structure, yet improves training speed while maintaining classification accuracy.
The ResNet (residual network) classification model adds shortcut connections to the network. Whereas previous network structures apply a nonlinear transformation to the input, ResNet preserves a certain proportion of the output of the previous network layer, allowing the original input information to pass directly to the following layers.
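To illustrate the shortcut connection described above, a minimal residual block is sketched below in PyTorch; the choice of PyTorch and the layer sizes are assumptions, since the patent only states that Python is used.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: the input is added back onto the output of the
    convolutional transformation, so the original information passes directly
    to the following layer."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # shortcut connection

# Usage sketch with an arbitrary feature-map size.
block = ResidualBlock(channels=16)
y = block(torch.randn(1, 16, 45, 45))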
The DenseNet classification model establishes dense connections from all preceding layers to each subsequent layer in the neural network; that is, each layer accepts the outputs of all its preceding layers as additional inputs.
Further, the training process of the trained deep learning classification model comprises the following steps:
constructing a training set, wherein the training set consists of Alzheimer's disease images with known disease classification results, containing 70 Alzheimer's disease patients and 120 normal controls;
preprocessing the training set to obtain a plurality of splicing matrices with known disease classification result labels, wherein each Alzheimer's disease image corresponds to one splicing matrix;
constructing a deep learning classification model;
and inputting the training set into the deep learning classification model, training the model, and stopping training when the loss function value of the model no longer decreases, so as to obtain the trained deep learning classification model (a minimal training-loop sketch is given below).
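A minimal sketch of such a training loop, assuming PyTorch: the stopping rule follows the description above (stop when the loss no longer decreases), while the optimiser, learning rate, and patience are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train(model, loader, max_epochs=200, patience=5, lr=1e-4):
    """Train until the epoch loss stops decreasing, as described above.
    Optimiser, learning rate and patience are illustrative assumptions."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    best_loss, stalled = float("inf"), 0
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for x, y in loader:                # x: spliced voxel matrices, y: AD / NC labels
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
            epoch_loss += loss.item()
        if epoch_loss < best_loss - 1e-6:
            best_loss, stalled = epoch_loss, 0
        else:
            stalled += 1
            if stalled >= patience:        # loss no longer decreasing: stop training
                break
    return model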
Illustratively, DTI images in public datasets are first acquired as the training set. The invention adopts data from 2 public datasets, namely the ADNI-2 and ADNI-3 datasets in the Alzheimer's Disease Neuroimaging Initiative database: the ADNI-2 dataset comprises 50 AD and 64 NC subjects, and the baseline ADNI-3 dataset comprises 51 AD and 91 NC subjects. The data parameters are as follows: magnetic field strength = 3.0 T, flip angle = 90°, b = 1000 s/mm², pixel size = 1.36 × 1.36 mm², repetition time (TR) = 9050 ms, echo time (TE) = 62.8 ms, and slice thickness = 2.7 mm.
Illustratively, the preprocessing of the training set uses the FMRIB Software Library (FSL), developed by the Oxford Centre for Functional MRI of the Brain (FMRIB Centre), to eliminate eddy current and head movement distortion in the DTI data and to extract a 3D image without diffusion gradients from the corrected images as the reference image. From the reference 3D image, the skull portion is removed using the Brain Extraction Tool (BET) to obtain the brain image.
The tensor calculation section generates the FA, MD, and RD images from the diffusion tensor model in FSL. A Bayesian estimation of the diffusion parameters is used to detect crossing-fiber voxels.
In the voxel extraction stage, in order to ensure spatial consistency of the images, the FA images of all subjects need to be registered to a standard space.
For nonlinear registration, all FA images are registered to the standard space; the standard image used in registration is the FMRIB58_FA standard-space image with a size of 182 × 218 × 182.
The registered images are then affine transformed into the MNI152 standard space, and combining the nonlinear registration with the affine transformation of the FA images transforms each image into the 1 × 1 × 1 mm³ MNI152 space.
The resulting nonlinear transformation is applied to all subjects to bring them into the standard space.
All the subjects' FA images are combined into a 4D file, and the average FA image of all subjects is generated together with the skeleton obtained from it; the skeleton is the white matter fiber bundle skeleton, obtained from the average FA image generated from all subjects. The threshold of the FA skeleton determines the number of voxels contained in the skeleton: different thresholds yield skeletons of different sizes, and the larger the threshold, the fewer skeleton voxels are obtained. After checking the correspondence between each subject's FA image and the FA skeleton under different thresholds, an FA skeleton with a threshold of 0.2 is extracted and applied to the images of all subjects; the results are shown in fig. 2 (a), 2 (b) and 2 (c). Next, the randomise tool provided by FSL is used to perform statistical analysis on the voxels corresponding to the extracted skeleton, and a TFCE-based multiple-comparison correction method is used to correct the voxels with differences between the AD group and the NC group, where the AD group consists of Alzheimer's disease (AD) patients and the NC group is the normal control (NC) group. Points with P < 0.05 are selected as points with significant differences; the P value is the probability of a significant difference, and according to the P value obtained by the significance test, P < 0.05 is taken as statistically different. The results are shown in fig. 3 (a), fig. 3 (b) and fig. 3 (c). The same operations are performed for the MD image and the RD image.
In the block splicing part, a 9 × 9 block is first extracted centered on each voxel coordinate found in the previous stage; if two voxels are adjacent, the adjacent voxel is removed to avoid obtaining identical blocks. The blocks from the FA and MD images are then spliced in turn according to their spatial positions to obtain 270 × 18 matrices, the RD image yields a 270 × 9 matrix, and the three matrices obtained are spliced into a 270 × 45 large matrix, which is used as the input of the deep learning classification model.
In the classification stage, deep learning classification models were used to predict the disease; six classification models were used in total, including LeNet, AlexNet, VGGNet, Inception-v4, ResNet, and DenseNet, and the results are shown in Table 1. The deep learning software used is Python.
The invention first eliminates the eddy current and head movement distortion in the DTI data and obtains a 3D image without diffusion gradients as the reference image; the skull portion is then removed to obtain the brain image, after which a fractional anisotropy (FA) image, a mean diffusivity (MD) image, and a radial diffusivity (RD) image are generated according to the tensor model.
In the nonlinear registration section, all anisotropic score images are registered to the FMRIB58_FA standard image; the FMRIB58_FA standard image is a standard template image provided by the FSL software, synthesized from the FA images of 58 normal adults. An average FA skeleton is extracted and an appropriate threshold is selected, and the data are then statistically analyzed using the randomise tool, a statistical testing tool provided by the FSL software. Points with P < 0.05 are then extracted; the P value is the probability of a significant difference, and according to the P value obtained by the significance test, P < 0.05 is taken as statistically different. A block is then extracted centered on each point coordinate and the blocks are spliced according to their spatial positions; the same operations are performed on the MD and RD images to obtain three-dimensional matrices, which are merged into one matrix along the third (depth) dimension, taken as the deep learning input, and classified with the deep learning network. Six classification models are used here to predict the disease, including LeNet, AlexNet, VGGNet, Inception-v4, ResNet, and DenseNet.
TABLE 1. Classification indices of the present invention using the six models
Example two
The embodiment also provides an electronic device comprising one or more processors, one or more memories and one or more computer programs, wherein the processors are connected with the memories, the one or more computer programs are stored in the memories, and when the electronic device is operated, the processors execute the one or more computer programs stored in the memories so as to enable the electronic device to execute the following steps:
acquiring brain images to be classified;
preprocessing brain images to be classified;
Generating an anisotropic score image, an average diffusivity image, and a radial diffusivity image based on the preprocessed brain image;
Extracting a first voxel matrix from the anisotropic score image, extracting a second voxel matrix from the average diffusivity image, extracting a third voxel matrix from the radial diffusivity image, and splicing the three matrixes to obtain a spliced matrix;
and inputting the splicing matrix into the trained deep learning classification model to obtain a disease classification result.
Example III
The present embodiment also provides a computer-readable storage medium storing computer instructions that, when executed by a processor, perform the steps of:
acquiring brain images to be classified;
preprocessing brain images to be classified;
Generating an anisotropic score image, an average diffusivity image, and a radial diffusivity image based on the preprocessed brain image;
Extracting a first voxel matrix from the anisotropic score image, extracting a second voxel matrix from the average diffusivity image, extracting a third voxel matrix from the radial diffusivity image, and splicing the three matrixes to obtain a spliced matrix;
and inputting the splicing matrix into the trained deep learning classification model to obtain a disease classification result.
The above description covers only the preferred embodiments of the present invention and is not intended to limit the present invention; various modifications and variations may be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.
Claims (7)
1. A disease classification system based on deep learning and fiber bundle space statistical analysis, comprising:
An acquisition module configured to acquire brain images to be classified;
A preprocessing module configured to preprocess brain images to be classified;
An image generation module configured to generate an anisotropic score image, an average diffusivity image, and a radial diffusivity image based on the preprocessed brain image;
The voxel extraction module is configured to extract a first voxel matrix from the anisotropic score image, extract a second voxel matrix from the average diffusivity image, extract a third voxel matrix from the radial diffusivity image, and splice the three matrices to obtain a spliced matrix;
The method for extracting the first voxel matrix from the anisotropic score image specifically comprises the following steps:
Extracting a cube block by taking the coordinates of each selected differential voxel in the training set as the center;
Splicing all the cube blocks according to the space positions to obtain a first voxel matrix;
The coordinate of each selected differential voxel in the training set is obtained by the following steps:
in the training set, registering all anisotropic score images to a standard space;
then affine transforming the registered images into the standard space;
synthesizing the anisotropic score images of all subjects in the training set into a 4D file in the standard space, and simultaneously generating the average anisotropic score image of all subjects;
obtaining a skeleton of the average anisotropic score image, wherein the skeleton is a white matter fiber bundle skeleton obtained from the average anisotropic score image generated from all subjects;
taking the skeleton of the average anisotropic score image as the skeleton of the anisotropic score images of all subjects in the training set;
Determining voxels with differences between the skeletons of the anisotropic score images of the known patients in the training set and the skeletons of the anisotropic score images of the known normal control group in the training set, and screening the voxels with differences to obtain coordinates of each screened voxel with differences in the training set, wherein the method specifically comprises the following steps:
Using randomise tools provided by FSL to determine voxels with differences between the patient anisotropic score images and the normal control group, and correcting by using a multiple comparison mode;
selecting points with P values smaller than a set threshold as points with significant differences;
The P value is the probability of having significant difference, and P < 0.05 is taken as the value with statistical difference according to the P value obtained by the significance test method;
The classification module is configured to input the splicing matrix into the trained deep learning classification model to obtain a disease classification result;
The deep learning model training module is configured to: acquire diffusion tensor imaging images in public datasets as a training set, adopting data of 2 public datasets including the ADNI-2 and ADNI-3 datasets in the Alzheimer's Disease Neuroimaging Initiative database, with a plurality of AD and NC subjects in the baseline ADNI-2 dataset; detect crossing-fiber voxels using a Bayesian estimation of the diffusion parameters; in the voxel extraction stage, register the anisotropic score images of all subjects to a standard space by running nonlinear registration, the standard image used in registration being the FMRIB58_FA standard-space image; affine transform the registered images into the MNI152 standard space, combining the nonlinear registration and the affine transformation on the anisotropic score images to transform each image into the MNI152 space; and apply the obtained nonlinear transformation to all subjects to bring them into the standard space; wherein AD is the Alzheimer's disease patient group and NC is the normal control group.
2. The deep learning and fiber bundle space statistical analysis based disease classification system of claim 1, wherein the preprocessing module is configured to:
Eliminating eddy currents of brain images to be classified and distortion of head movements, and removing skull to obtain brain images.
3. The deep learning and fiber bundle space statistical analysis based disease classification system of claim 1, wherein the image generation module is configured to:
and processing the preprocessed brain image by using a least square method, and performing diffusion tensor imaging to generate an anisotropic fraction image, an average diffusivity image and a radial diffusivity image.
4. The deep learning and fiber bundle space statistical analysis based disease classification system of claim 1, wherein the training process of the trained deep learning classification model comprises:
preprocessing the training set to obtain a plurality of splicing matrixes of the known disease classification result labels, wherein each Alzheimer image corresponds to one splicing matrix;
constructing a deep learning classification model;
And inputting the training set into a deep learning classification model, training the model, and stopping training when the loss function value of the model is not reduced any more, so as to obtain the trained deep learning classification model.
5. The disease classification system based on deep learning and fiber bundle space statistical analysis of claim 1, wherein the extraction of the second voxel matrix from the average diffusivity image or the extraction of the third voxel matrix from the radial diffusivity image is consistent with the specific step of extracting the first voxel matrix from the anisotropic score image.
6. An electronic device of a disease classification system based on deep learning and fiber bundle space statistical analysis as claimed in claim 1, comprising:
A memory for non-transitory storage of computer readable instructions, and
A processor for executing the computer-readable instructions,
Wherein the computer readable instructions, when executed by the processor, perform the steps of:
acquiring brain images to be classified;
preprocessing brain images to be classified;
Generating an anisotropic score image, an average diffusivity image, and a radial diffusivity image based on the preprocessed brain image;
Extracting a first voxel matrix from the anisotropic score image, extracting a second voxel matrix from the average diffusivity image, extracting a third voxel matrix from the radial diffusivity image, and splicing the three matrixes to obtain a spliced matrix;
and inputting the splicing matrix into the trained deep learning classification model to obtain a disease classification result.
7. A storage medium of the disease classification system based on deep learning and fiber bundle space statistical analysis according to claim 1, storing non-transitory computer readable instructions, wherein the non-transitory computer readable instructions, when executed by a computer, perform the steps of:
acquiring brain images to be classified;
preprocessing brain images to be classified;
Generating an anisotropic score image, an average diffusivity image, and a radial diffusivity image based on the preprocessed brain image;
Extracting a first voxel matrix from the anisotropic score image, extracting a second voxel matrix from the average diffusivity image, extracting a third voxel matrix from the radial diffusivity image, and splicing the three matrixes to obtain a spliced matrix;
and inputting the splicing matrix into the trained deep learning classification model to obtain a disease classification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111603683.6A CN114494132B (en) | 2021-12-24 | 2021-12-24 | Disease classification system based on deep learning and spatial statistical analysis of fiber tracts |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111603683.6A CN114494132B (en) | 2021-12-24 | 2021-12-24 | Disease classification system based on deep learning and spatial statistical analysis of fiber tracts |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114494132A CN114494132A (en) | 2022-05-13 |
CN114494132B true CN114494132B (en) | 2024-12-27 |
Family
ID=81496522
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111603683.6A Active CN114494132B (en) | 2021-12-24 | 2021-12-24 | Disease classification system based on deep learning and spatial statistical analysis of fiber tracts |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114494132B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115359305B (en) * | 2022-10-19 | 2023-01-10 | 之江实验室 | A precise positioning system for abnormal areas of brain fiber tracts |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2996746B1 (en) * | 2012-10-17 | 2015-11-13 | Assist Publ Hopitaux De Paris | METHOD FOR QUANTIFYING BRAIN LESIONS |
CN109691985A (en) * | 2018-12-19 | 2019-04-30 | 北京工业大学 | A kind of temporal epilepsy aided diagnosis method based on DTI technology and SVM |
CN111311585A (en) * | 2020-02-24 | 2020-06-19 | 南京慧脑云计算有限公司 | Magnetic resonance diffusion tensor brain image analysis method and system for neonates |
CN112990266B (en) * | 2021-02-07 | 2023-08-15 | 西安电子科技大学 | Method, device, equipment and storage medium for processing multi-mode brain image data |
CN113255721B (en) * | 2021-04-13 | 2024-03-22 | 浙江工业大学 | Tumor peripheral surface auditory nerve recognition method based on machine learning |
Non-Patent Citations (1)
Title |
---|
Spatial statistical analysis and automatic recognition of brain white matter fiber tractography in patients with temporal lobe epilepsy; Zhao Di et al.; Journal of Biomedical Engineering; 2017-08-25; Vol. 34, No. 4; pp. 500-509 *
Also Published As
Publication number | Publication date |
---|---|
CN114494132A (en) | 2022-05-13 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |