CN109584244B - Hippocampus segmentation method based on sequence learning - Google Patents
Hippocampus segmentation method based on sequence learning
- Publication number
- CN109584244B CN109584244B CN201811449294.0A CN201811449294A CN109584244B CN 109584244 B CN109584244 B CN 109584244B CN 201811449294 A CN201811449294 A CN 201811449294A CN 109584244 B CN109584244 B CN 109584244B
- Authority
- CN
- China
- Prior art keywords
- segmentation
- convolution
- image
- layer
- channel number
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 230000011218 segmentation Effects 0.000 title claims abstract description 65
- 238000000034 method Methods 0.000 title claims abstract description 36
- 210000001320 hippocampus Anatomy 0.000 title abstract description 16
- 230000000971 hippocampal effect Effects 0.000 claims abstract description 15
- 238000012549 training Methods 0.000 claims abstract description 14
- 210000004556 brain Anatomy 0.000 claims abstract description 12
- 230000006870 function Effects 0.000 claims abstract description 10
- 238000007781 pre-processing Methods 0.000 claims abstract description 4
- 230000001902 propagating effect Effects 0.000 claims abstract description 4
- 238000010606 normalization Methods 0.000 claims description 8
- 238000000605 extraction Methods 0.000 claims description 7
- 230000004913 activation Effects 0.000 claims description 4
- 238000011176 pooling Methods 0.000 claims description 4
- 238000012545 processing Methods 0.000 claims description 4
- 238000005520 cutting process Methods 0.000 claims description 3
- 238000005070 sampling Methods 0.000 claims description 2
- 238000013135 deep learning Methods 0.000 abstract description 7
- 238000001514 detection method Methods 0.000 abstract description 6
- 238000005481 NMR spectroscopy Methods 0.000 abstract description 4
- 210000000056 organ Anatomy 0.000 abstract description 2
- 238000010586 diagram Methods 0.000 description 7
- 238000002474 experimental method Methods 0.000 description 4
- 208000024827 Alzheimer disease Diseases 0.000 description 3
- 230000000694 effects Effects 0.000 description 2
- 230000004927 fusion Effects 0.000 description 2
- 239000011159 matrix material Substances 0.000 description 2
- 208000020016 psychiatric disease Diseases 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 201000008914 temporal lobe epilepsy Diseases 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 230000005856 abnormality Effects 0.000 description 1
- 230000002776 aggregation Effects 0.000 description 1
- 238000004220 aggregation Methods 0.000 description 1
- 238000013459 approach Methods 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 210000005013 brain tissue Anatomy 0.000 description 1
- 230000002490 cerebral effect Effects 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 238000002790 cross-validation Methods 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 238000011423 initialization method Methods 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 238000011835 investigation Methods 0.000 description 1
- 210000000653 nervous system Anatomy 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 230000002685 pulmonary effect Effects 0.000 description 1
- 210000001525 retina Anatomy 0.000 description 1
- 201000000980 schizophrenia Diseases 0.000 description 1
- 239000013589 supplement Substances 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 238000009827 uniform distribution Methods 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the field of computer vision and deep learning, in particular to a hippocampus segmentation method based on sequence learning. The method comprises the following steps: step 1, preprocessing an original image set A; step 2, constructing a network model, wherein the hippocampus segmentation network model comprises an encoding part, a bidirectional convolutional long short-term memory network (BDC-LSTM) and a decoding part; step 3, training the model, in which the anatomical plane atlases D, E, F are forward-propagated to obtain single-iteration results and the weight models J, K, L are obtained by back-propagating the loss function. By using a deep-learning-based network, the invention achieves efficient, automatic and accurate segmentation of the hippocampal structure in human brain magnetic resonance images, with high running speed while guaranteeing high segmentation accuracy. The method is also highly extensible: besides detection of the hippocampus, the network of the present invention can be retrained for the detection and segmentation of other organs or tissues.
Description
Technical Field
The invention relates to the field of computer vision and deep learning, in particular to a hippocampus segmentation method based on sequence learning.
Background
The hippocampus is an important component of the brain's nervous system, and abnormalities in its volume and function are closely related to many mental diseases, such as temporal lobe epilepsy (TLE), Alzheimer's disease (AD) and schizophrenia. Accurate segmentation of the hippocampus can therefore assist doctors in diagnosing and treating related mental diseases, and is of great medical value. Magnetic resonance images provide three-dimensional brain tissue information with rich contrast and high resolution, and are important data for studying hippocampal morphology. Studying the volumetric morphology of the hippocampus in brain MRI images and achieving accurate segmentation of the three-dimensional hippocampus has thus become an important task in medical image research.
Conventional approaches to hippocampus segmentation include manual segmentation, semi-automatic segmentation and traditional automatic segmentation methods, but these are tedious and time-consuming, and their segmentation accuracy and efficiency are unsatisfactory.
In recent years, deep learning has developed rapidly in artificial intelligence, particularly in image processing, with good research results in image classification, detection and segmentation; sequence learning within deep learning is also widely applied.
Disclosure of Invention
The invention provides a hippocampus segmentation method based on sequence learning, aiming to solve the problems of low segmentation accuracy and long segmentation time of hippocampus segmentation in brain MRI images.
The hippocampus segmentation method based on sequence learning comprises the following steps:
Step 1, preprocessing an original image set A.
The original image set A comprises N groups of brain MRI hippocampal image files in NIfTI format.
In the present invention N is 120: 62 groups of images of size 192×192×160, 35 groups of size 256×256×166, and 23 groups of size 256×256×180.
1.1 cropping images
The positions and extents of the hippocampi in the 120 groups of images are counted, and the image files in the original image set A are cropped to the sizes given in Table A, yielding image set B.
Table A Cropping regions for the three image sizes

Image size | x | y | z1 | z2
---|---|---|---|---
192×192×160 | [60:140] | [68:148] | [38:78] | [80:120]
256×256×166 | [90:170] | [100:180] | [40:80] | [85:125]
256×256×180 | [100:180] | [105:185] | [48:88] | [90:130]

wherein (x, y, z1) bounds the left hippocampus and (x, y, z2) bounds the right hippocampus;
further, the image file is cut into the image file with the size of 80 x 40 and the size of 80 x 40, so that an effective area containing the sea horse can be obtained, the segmentation accuracy is higher, and the training speed can be accelerated.
1.2 data normalization
Data normalization is performed on image set B so that the voxel values lie in the range [0, 1], yielding the normalized image set C.
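The patent does not specify the normalization formula; a plausible sketch is per-volume min-max scaling into [0, 1]:

```python
import numpy as np

def normalize_volume(vol):
    """Min-max normalize voxel intensities into [0, 1] (one assumption for
    the unspecified normalization; per-volume scaling)."""
    vol = vol.astype(np.float64)
    vmin, vmax = vol.min(), vol.max()
    if vmax == vmin:
        return np.zeros_like(vol)  # constant volume: map everything to 0
    return (vol - vmin) / (vmax - vmin)
```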
1.3 data serialization
The image set C is serialized in three directions, coronal, sagittal, and transverse, respectively, to generate three sets D, E, F of anatomical plan views under different views, each set comprising a sequence of slices.
Each slice sequence in anatomical plane atlas D contains 80 slices, each sequence in atlas E contains 80 slices, and each sequence in atlas F contains 40 slices.
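Serializing a cropped 80×80×40 volume along its three axes can be sketched as below; which array axis corresponds to which anatomical plane depends on the NIfTI orientation, so the axis-to-plane mapping here is an assumption.

```python
import numpy as np

def serialize_views(vol):
    """Split a cropped volume into slice sequences along the three axes.
    The comments assign planes to axes only for illustration."""
    d = [vol[i, :, :] for i in range(vol.shape[0])]  # e.g. coronal slices
    e = [vol[:, j, :] for j in range(vol.shape[1])]  # e.g. sagittal slices
    f = [vol[:, :, k] for k in range(vol.shape[2])]  # e.g. transverse slices
    return d, e, f
```

For an 80×80×40 crop this gives sequences of 80, 80 and 40 slices, matching the counts stated above.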
Step 2, constructing the hippocampus segmentation network model;
The hippocampus segmentation network model of the invention comprises an encoding part, a bidirectional convolutional long short-term memory network (BDC-LSTM) and a decoding part; the overall structure is shown in figure 2.
First, the anatomical plane atlases D, E, F each pass independently through the encoding part for feature extraction; the extracted features are then fed into the BDC-LSTM for training, which mines the spatial sequence relations between consecutive slices in the atlas; finally, the BDC-LSTM output is up-sampled by the decoding part, achieving end-to-end segmentation. Only one set of anatomical plane atlases is fed into the network for training at a time.
Encoding part: the encoding portion is to perform feature extraction on slices in the anatomical plan atlas D, E, F under three different sets of views. The coding part comprises four groups of convolution networks and a maximum pooling layer, and the network structure diagram of the coding part is shown in fig. 3. The first group is a convolutional layer of 3*3 with a channel number of 16. To extract more features, the second set uses three different convolutions to extract information of multiple scales. The first is a convolution of 1*1 with 16 channels, the second is a convolution of 3*3 with 16 channels, and the third is a convolution of 5*5 with 16 channels. The third group is a convolution layer of 3*3 with the channel number of 16, and is used for performing feature extraction after performing aggregation operation on the feature graphs extracted by three different convolutions in the second group. The fourth group is a convolution layer of 3*3 with a channel number of 16. After four sets of convolutions, a maximum pooling layer is connected in order to reduce the size of the feature map.
Furthermore, to accelerate convergence of the network, Batch Normalization is added after the first, third and fourth groups of convolution layers, and ReLU is adopted as the activation function.
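The multi-scale second group can be sketched in plain NumPy. This is an illustrative sketch, not the patent's actual (Keras-based) implementation: the naive convolution and the random-weight helper below are stand-ins for trained layers.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive zero-padded 'same' 2-D convolution.
    x: (H, W, C_in), k: (kh, kw, C_in, C_out) -> (H, W, C_out)."""
    kh, kw, _, cout = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw), (0, 0)))
    H, W, _ = x.shape
    out = np.zeros((H, W, cout))
    for i in range(H):
        for j in range(W):
            # contract the (kh, kw, C_in) patch against the kernel
            out[i, j] = np.tensordot(xp[i:i + kh, j:j + kw], k, axes=3)
    return out

def multi_scale_group(x, rng):
    """Second encoder group: parallel 1x1, 3x3 and 5x5 convolutions
    (16 channels each), aggregated along the channel axis; random weights
    stand in for trained ones."""
    feats = [conv2d_same(x, 0.01 * rng.standard_normal((k, k, x.shape[-1], 16)))
             for k in (1, 3, 5)]
    return np.concatenate(feats, axis=-1)  # 48 channels before the 3x3 fusion conv
```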
The serialized anatomical plan atlas D, E, F is passed through the encoded section to obtain a corresponding set of feature maps G, H, I.
BDC-LSTM: BDC-LSTM is used to better mine the spatial sequence relationship of successive slices from the encoded three sets of feature maps G, H, I.
Hochreiter et al. proposed the long short-term memory network (LSTM) in 1997, which successfully addresses the deficiencies of the original RNN by adding a gating "processor" that determines whether information is useful; the structure this processor acts on is called the cell state. When the input sequence consists of images, an extended form of the LSTM, the convolutional LSTM (CLSTM), is widely used. By combining the CLSTM with other convolutional networks, the correlation between input images can be exploited effectively, enabling more accurate segmentation.
The difference from the ordinary LSTM is that the CLSTM replaces matrix multiplications with convolution operations, thereby preserving the spatial information of the sequence; this is very effective for processing image sequences. The CLSTM is defined as follows:

i_t = σ(x_t * W_xi + h_(t-1) * W_hi + b_i)
f_t = σ(x_t * W_xf + h_(t-1) * W_hf + b_f)
c_t = f_t ∘ c_(t-1) + i_t ∘ tanh(x_t * W_xc + h_(t-1) * W_hc + b_c)
o_t = σ(x_t * W_xo + h_(t-1) * W_ho + b_o)
h_t = o_t ∘ tanh(c_t)

where * denotes the convolution operation, ∘ denotes the element-wise (Hadamard) product, σ is the sigmoid function and tanh is the hyperbolic tangent. The network has three gates: the input gate i_t, the forget gate f_t and the output gate o_t. b_i, b_f, b_c, b_o are bias terms; x_t, c_t, h_t are the input, cell state and hidden state at time t. Each W_** is a weight matrix controlling one transition; for example, W_hf controls how the forget gate obtains its value from the hidden state.
To increase the information available to the CLSTM and fully exploit the close relation between each slice and its adjacent upper and lower slices, the invention adopts a bidirectional convolutional long short-term memory network (BDC-LSTM): two CLSTM layers are used, one running forward through the slice sequence and one running backward (see fig. 4). This mines the spatial sequence relations between slices more thoroughly.
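The equations above and the bidirectional wiring can be sketched as follows. This is a simplified sketch, not the trained network: biases are omitted, weights are passed as a plain dict, and a naive convolution stands in for an optimized implementation.

```python
import numpy as np

def _conv(x, k):
    """Naive 'same' 2-D convolution: x (H, W, C_in), k (kh, kw, C_in, C_out)."""
    kh, kw, _, cout = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2), (0, 0)))
    out = np.zeros(x.shape[:2] + (cout,))
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.tensordot(xp[i:i + kh, j:j + kw], k, axes=3)
    return out

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def clstm_step(x_t, h_prev, c_prev, W):
    """One CLSTM step: the matrix products of a plain LSTM become
    convolutions. W is a dict keyed 'xi', 'hi', ..., biases omitted."""
    i = _sigmoid(_conv(x_t, W['xi']) + _conv(h_prev, W['hi']))
    f = _sigmoid(_conv(x_t, W['xf']) + _conv(h_prev, W['hf']))
    o = _sigmoid(_conv(x_t, W['xo']) + _conv(h_prev, W['ho']))
    c = f * c_prev + i * np.tanh(_conv(x_t, W['xc']) + _conv(h_prev, W['hc']))
    return o * np.tanh(c), c

def bdc_lstm(seq, W_fwd, W_bwd, C):
    """Run one CLSTM forward and one backward over the slice sequence,
    then concatenate the hidden states channel-wise (BDC-LSTM)."""
    def run(s, W):
        h = np.zeros(s[0].shape[:2] + (C,))
        c = np.zeros_like(h)
        out = []
        for x_t in s:
            h, c = clstm_step(x_t, h, c, W)
            out.append(h)
        return out
    fwd = run(seq, W_fwd)
    bwd = run(seq[::-1], W_bwd)[::-1]  # realign backward pass to slice order
    return [np.concatenate([f, b], axis=-1) for f, b in zip(fwd, bwd)]
```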
Decoding part: the decoding part mainly upsamples the output of the BDC-LSTM to obtain the same resolution as the input image. The primary network structure is shown in fig. 5. The decoding section comprises a deconvolution layer of 3*3 with 16 channels and a convolution layer of 3*3 with 16 channels, and finally a convolution layer of 3*3 with 1 channels is connected.
Further, to accelerate convergence, Batch Normalization is added after the deconvolution layer and the 3×3 convolution layer with 16 channels, and ReLU is used as the activation function.
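The up-sampling role of the decoder can be illustrated with a nearest-neighbour sketch; note this is a stand-in for the learned 3×3 deconvolution, not the patent's actual layer.

```python
import numpy as np

def upsample2x(x):
    """Double the spatial resolution of a feature map (H, W, C) by
    nearest-neighbour repetition -- an illustrative stand-in for the
    learned deconvolution in the decoding part."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
```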
The network has fewer layers than U-Net and 3D U-Net, which reduces the number of parameters and shortens training time, while its segmentation accuracy is higher than that of U-Net and 3D U-Net.
Applying the bidirectional convolutional long short-term memory network (BDC-LSTM) to the hippocampal segmentation task better mines the spatial information in 3D brain MRI images and improves segmentation accuracy beyond that of the CLSTM. Combining a fully convolutional network with the BDC-LSTM further improves accuracy on the basis of a small number of feature extraction steps.
Step 3, training a model;
forward propagating the anatomical plan atlas D, E, F yields a single iteration result and calculates a weight model J, K, L of the loss function by back propagation.
Training on the anatomical plane atlases under the three views yields three weight models J, K, L for hippocampal segmentation; averaging them gives the final training model M.
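The averaging of the per-view weight models can be sketched as below, assuming the three models share an identical architecture so their parameter tensors correspond one-to-one (the dict representation is an assumption for illustration):

```python
import numpy as np

def average_models(models):
    """Average corresponding parameter tensors of the per-view weight
    models J, K, L to form the final model M. Assumes every model has
    the same parameter names and shapes."""
    return {name: np.mean([m[name] for m in models], axis=0)
            for name in models[0]}
```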
Compared with the prior art, the invention has the beneficial effects that:
(1) Using a deep-learning-based network, efficient, automatic and accurate segmentation of the hippocampal structure in human brain magnetic resonance images is achieved, which can help doctors diagnose Alzheimer's disease at an early stage.
(2) Efficient automatic accurate segmentation: an input human brain magnetic resonance image can be segmented directly to obtain the result, with relatively high running speed while high segmentation accuracy is guaranteed.
(3) Strong extensibility: besides detection of the hippocampus, the network of the invention can easily be retrained and applied to the detection and segmentation of other organs or tissues, such as retinal fundus segmentation and pulmonary nodule detection.
Drawings
FIG. 1 is a schematic diagram of a workflow framework of the present invention.
Fig. 2 is an overall structure diagram based on deep learning provided by the invention.
Fig. 3 is a network configuration diagram of an encoding part provided by the present invention.
Fig. 4 is a network configuration diagram of the BDC-LSTM provided by the present invention.
Fig. 5 is a network configuration diagram of a decoding section provided by the present invention.
Detailed Description
The invention is further described below in conjunction with the detailed description.
To verify the effectiveness of the method, experiments were performed on the ADNI database; the experimental data consist of 120 groups of brain MRI images, covering both patients and healthy controls. To verify the performance of the model, 10-fold cross-validation was used: the data were divided into 10 parts, 9 for training and 1 for testing, until all data had been tested. For model optimization the Nadam algorithm was adopted, the learning rate was set to 0.001, and weights were initialized with the Glorot uniform-distribution initialization method.
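The 10-fold split described above can be sketched as follows (the shuffling seed and function name are illustrative assumptions):

```python
import numpy as np

def ten_fold_splits(n_cases=120, n_folds=10, seed=0):
    """Yield (train_idx, test_idx) pairs for 10-fold cross-validation:
    each fold serves as the test set once, the remaining 9 as training."""
    idx = np.random.default_rng(seed).permutation(n_cases)
    folds = np.array_split(idx, n_folds)
    for i in range(n_folds):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        yield train, test
```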
The hardware and software environment was: processor, Intel Core i7-9700K CPU @ 4.2 GHz; memory (RAM), 32.0 GB; discrete graphics card, NVIDIA GeForce GTX 1070; system type, Ubuntu 16.04; development tools, Python and the Keras framework.
The effect of the trained model M is evaluated by model verification. Result accuracy is measured with the Dice metric, an index commonly used in medical image research, which is used here to evaluate the accuracy of the proposed segmentation algorithm.
The Dice metric is defined as follows:

Dice(M, A) = 2·V(M ∩ A) / (V(M) + V(A))

where M represents the result of manual segmentation by an expert (the gold standard), A represents the result of the automatic segmentation algorithm, and V(·) represents the volume of the region in question.
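For binary masks the metric reduces to a simple voxel count:

```python
import numpy as np

def dice_metric(m, a):
    """Dice = 2*V(M∩A) / (V(M)+V(A)) for binary masks m (gold standard)
    and a (automatic segmentation)."""
    m, a = m.astype(bool), a.astype(bool)
    inter = np.logical_and(m, a).sum()
    return 2.0 * inter / (m.sum() + a.sum())
```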
The specific implementation includes the following four parts: the first compares the CLSTM with the BDC-LSTM, the second compares single-view with multi-view segmentation, the third compares the BDC-LSTM with U-Net and 3D U-Net, and the last compares the proposed method with other existing methods.
1 Comparison of CLSTM and BDC-LSTM
After the same preprocessing of the 120 groups of MRI images, the segmentation results of the CLSTM and the BDC-LSTM were compared; the results are shown in Table 1.
TABLE 1 Comparison of CLSTM and BDC-LSTM
As the table shows, the segmentation accuracy of the BDC-LSTM is significantly higher than that of the CLSTM, verifying that the BDC-LSTM learns inter-slice information better than the CLSTM.
2 Comparison of single view and multiple views
In a typical segmentation method based on a two-dimensional convolutional network, the 3D brain MRI is cut into 2D slices under a certain view, which are then fed into the network for training; since slice structures differ across views, the segmentation result under a single view may not be sufficiently accurate. The invention therefore compares results under a single view and under multiple views. For the same data set, three results under the sagittal, coronal and transverse views are obtained with the proposed hippocampal segmentation model; the three segmentation results are then integrated by averaging to obtain the multi-view result. The multi-view and single-view segmentation results are shown in Table 2.
Table 2 comparison of individual views and view integration
As the Dice scores in Table 2 show, multi-view fusion outperforms segmentation under any single view, since structural boundaries that are very blurred under one view may be very clear under another. Multi-view fusion thus fully exploits the smoothness and spatial coherence of the brain MRI image, letting the information under the different views complement one another to give a better segmentation result.
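The fusion by averaging can be sketched as below, under the assumption that the three per-view probability maps have been resampled onto a common volume grid; the 0.5 threshold is an illustrative choice.

```python
import numpy as np

def fuse_views(prob_maps, threshold=0.5):
    """Fuse per-view probability maps by voxel-wise averaging, then
    binarize. Assumes the maps share one volume grid."""
    return (np.mean(prob_maps, axis=0) >= threshold).astype(np.uint8)
```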
3 Comparison of BDC-LSTM with U-Net and 3D U-Net
U-Net and 3D U-Net are currently the dominant methods in medical image segmentation: U-Net processes 2D slices, while 3D U-Net segments the three-dimensional image directly; the BDC-LSTM network in the proposed segmentation model fully mines the spatial information between slices. After the same preprocessing of the 120 groups of data, segmentation was performed with U-Net, 3D U-Net and the proposed method; the results are shown in Table 3, from which it can be seen that the BDC-LSTM accuracy is higher than that of the other two methods.
TABLE 3 Comparison of BDC-LSTM, U-Net and 3D U-Net
4 Comparison of BDC-LSTM with other existing methods
The proposed segmentation model is compared with some recent methods for hippocampal segmentation. Because the experiments were performed on different cases from the ADNI dataset, a fully controlled quantitative comparison with these methods is not possible; nevertheless, the average Dice scores in Table 4 indicate that the method of the invention outperforms the other methods.
Table 4 comparison of hippocampal segmentation algorithms in ADNI database
The BDC-LSTM network provided by the invention fully mines the spatial information of 3D brain MRI images, yielding higher segmentation accuracy. Experimental results on the ADNI database show that the proposed sequence-learning-based hippocampus segmentation method achieves results superior to other current methods. For 3D medical image studies, the model can perform segmentation tasks more easily and accurately.
Claims (5)
1. A hippocampus segmentation method based on sequence learning, characterized by comprising the following steps:
step 1, preprocessing an original image set A;
the original image set A comprises N groups of brain MRI hippocampal image files in NIfTI format;
1.1 cropping images
counting the positions and extents of the hippocampi in the N groups of images, and cropping the image files in the original image set A into image files meeting the size requirement, obtaining image set B;
the specific requirements are as follows: an image with a picture size of 192X 160 is cut into X60:140];y[68:148];Z 1 [38:78];Z 2 [80:120]The method comprises the steps of carrying out a first treatment on the surface of the An image with a picture size of 256X 166 is cut into X [690:170];y[100:180];Z 1 [40:80];Z 2 [85:125]The method comprises the steps of carrying out a first treatment on the surface of the An image with a picture size of 256X 180 is cut into X [100:180 ]];y[105:185];Z 1 [48:88];Z 2 [90:130];
1.2 data normalization
Carrying out data normalization processing on the image set B to enable the range of voxel values in the image set B to be [0,1] to obtain a normalized image set C;
1.3 data serialization
The image set C is respectively serialized according to three directions of a coronal plane, a sagittal plane and a cross section to generate an anatomical plane atlas D, E, F under three groups of different views, and each group of anatomical plane atlas contains a slice sequence;
step 2, constructing a sea horse segmentation network model;
the hippocampal segmentation network model comprises an encoding part, a BDC-LSTM and a decoding part;
firstly, feature extraction is carried out on an anatomical plane atlas D, E, F through a coding part respectively and independently, then a result after feature extraction is sent to BDC-LSTM for training, a spatial sequence relation of continuous slices in the anatomical plane atlas is mined, and finally, the result after BDC-LSTM operation is up-sampled through a decoding part, so that end-to-end segmentation is realized, and a group of anatomical plane atlas is sent to a network for training each time;
the coding part is used for extracting the characteristics of the slices in the anatomical plane atlas D, E, F under three groups of different views; the coding part comprises four groups of convolution networks and a maximum pooling layer; the first group is a convolution layer of 3*3 with a channel number of 16; the second set uses three different convolutions to extract information of multiple scales, the first is a convolution of 1*1 for a channel number of 16, the second is a convolution of 3*3 for a channel number of 16, and the third is a convolution of 5*5 for a channel number of 16; the third group is a convolution layer of 3*3 with a channel number of 16; the fourth group is a convolution layer of 3*3 with a channel number of 16; after four groups of convolutions, connecting a maximum pooling layer;
the BDC-LSTM is of a two-layer CLSTM structure, one layer of CLSTM is forward in time sequence, and the other layer of CLSTM is reverse in time sequence;
the decoding part is used for up-sampling the output of the BDC-LSTM to obtain the same resolution as the input image;
the decoding part comprises a deconvolution layer of 3*3 with the channel number of 16 and a convolution layer of 3*3 with the channel number of 16, and finally a convolution layer of 3*3 with the channel number of 1 is connected;
step 3, training a model;
forward propagating the anatomical plane atlas D, E, F to obtain a single iteration result, and calculating a weight model J, K, L obtained by back propagating the loss function;
the anatomical plan under the three views is trained to obtain three sets of weight models J, K, L for hippocampal segmentation, and the weight models J, K, L are averaged to obtain a final training model M.
2. The sequence-learning-based hippocampal segmentation method according to claim 1, wherein in step 2, Batch Normalization is added after the first, third and fourth groups of convolution layers of the encoding part, and ReLU is used as the activation function.
3. The sequence-learning-based hippocampal segmentation method according to claim 1 or 2, wherein in step 2, Batch Normalization is added after the deconvolution layer and the 3×3 convolution layer with 16 channels of the decoding part, and ReLU is used as the activation function.
4. The sequence-learning-based hippocampal segmentation method according to claim 1 or 2, wherein the image files in the original image set A of step 1.1 are cropped to size 80×80×40.
5. The sequence-learning-based hippocampal segmentation method according to claim 3, wherein the image files in the original image set A of step 1.1 are cropped to size 80×80×40.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811449294.0A CN109584244B (en) | 2018-11-30 | 2018-11-30 | Hippocampus segmentation method based on sequence learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811449294.0A CN109584244B (en) | 2018-11-30 | 2018-11-30 | Hippocampus segmentation method based on sequence learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109584244A CN109584244A (en) | 2019-04-05 |
CN109584244B true CN109584244B (en) | 2023-05-23 |
Family
ID=65923803
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811449294.0A Active CN109584244B (en) | 2018-11-30 | 2018-11-30 | Hippocampus segmentation method based on sequence learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109584244B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110287773A (en) * | 2019-05-14 | 2019-09-27 | 杭州电子科技大学 | Image recognition method for traffic hub security inspection based on autonomous learning |
CN110211140B (en) * | 2019-06-14 | 2023-04-07 | 重庆大学 | Abdominal Vessel Segmentation Method Based on 3D Residual U-Net and Weighted Loss Function |
CN110555847B (en) * | 2019-07-31 | 2021-04-02 | 瀚博半导体(上海)有限公司 | Image processing method and device based on convolutional neural network |
CN110414481A (en) * | 2019-08-09 | 2019-11-05 | 华东师范大学 | A 3D Medical Image Recognition and Segmentation Method Based on Unet and LSTM |
CN110969626B (en) * | 2019-11-27 | 2022-06-07 | 西南交通大学 | Hippocampus extraction method of human brain MRI based on 3D neural network |
CN111110228B (en) * | 2020-01-17 | 2023-04-18 | 武汉中旗生物医疗电子有限公司 | Electrocardiosignal R wave detection method and device |
CN113192150B (en) * | 2020-01-29 | 2022-03-15 | 上海交通大学 | Magnetic resonance interventional image reconstruction method based on cyclic neural network |
CN114202668A (en) * | 2020-08-31 | 2022-03-18 | 上海宽带技术及应用工程研究中心 | Method, system, medium, and apparatus for processing clinical data of alzheimer's disease |
CN112508953B (en) * | 2021-02-05 | 2021-05-18 | 四川大学 | A rapid segmentation and qualitative method of meningioma based on deep neural network |
CN114549417A (en) * | 2022-01-20 | 2022-05-27 | 高欣 | Abdominal fat quantification method based on deep learning and nuclear magnetic resonance Dixon |
CN116681705B (en) * | 2023-08-04 | 2023-09-29 | 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) | Surface morphology measurement method and processing equipment based on longitudinal structure of human brain hippocampus |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106920243A (en) * | 2017-03-09 | 2017-07-04 | 桂林电子科技大学 | The ceramic material part method for sequence image segmentation of improved full convolutional neural networks |
CN108062753A (en) * | 2017-12-29 | 2018-05-22 | 重庆理工大学 | The adaptive brain tumor semantic segmentation method in unsupervised domain based on depth confrontation study |
CN108427920A (en) * | 2018-02-26 | 2018-08-21 | 杭州电子科技大学 | A kind of land and sea border defense object detection method based on deep learning |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10592820B2 (en) * | 2016-06-09 | 2020-03-17 | International Business Machines Corporation | Sequential learning technique for medical image segmentation |
CN106157307B (en) * | 2016-06-27 | 2018-09-11 | 浙江工商大学 | Monocular image depth estimation method based on multi-scale CNN and continuous CRF |
CN107292346B (en) * | 2017-07-05 | 2019-11-15 | 四川大学 | A Segmentation Algorithm for Hippocampus in MR Images Based on Local Subspace Learning |
CN108154194B (en) * | 2018-01-18 | 2021-04-30 | 北京工业大学 | Method for extracting high-dimensional features by using tensor-based convolutional network |
- 2018-11-30 CN CN201811449294.0A patent/CN109584244B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN109584244A (en) | 2019-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109584244B (en) | Hippocampus segmentation method based on sequence learning | |
CN112465827B (en) | Contour perception multi-organ segmentation network construction method based on class-by-class convolution operation | |
CN111311592B (en) | An automatic segmentation method for 3D medical images based on deep learning | |
CN109035255B (en) | A segmentation method of aorta with dissection in CT images based on convolutional neural network | |
Ye et al. | Multi-depth fusion network for whole-heart CT image segmentation | |
CN113436211B (en) | A deep learning-based active contour segmentation method for medical images | |
CN111192245A (en) | A brain tumor segmentation network and segmentation method based on U-Net network | |
CN111696126B (en) | A multi-view and multi-task liver tumor image segmentation method | |
CN110853038A (en) | A DN-U-net network method for liver tumor CT image segmentation technology | |
Wang et al. | CLCU-Net: cross-level connected U-shaped network with selective feature aggregation attention module for brain tumor segmentation | |
Hui et al. | A partitioning-stacking prediction fusion network based on an improved attention U-Net for stroke lesion segmentation | |
CN113436173B (en) | Abdominal multi-organ segmentation modeling and segmentation method and system based on edge perception | |
CN111476796A (en) | A semi-supervised coronary artery segmentation system and segmentation method combining multiple networks | |
CN112288041B (en) | A Feature Fusion Method for Multimodal Deep Neural Networks | |
CN114494296A (en) | Brain glioma segmentation method and system based on fusion of Unet and Transformer | |
CN109801268B (en) | CT radiography image renal artery segmentation method based on three-dimensional convolution neural network | |
CN111275712A (en) | A Residual Semantic Network Training Method for Large-scale Image Data | |
CN111179269A (en) | PET image segmentation method based on multi-view and 3-dimensional convolution fusion strategy | |
CN110942464A (en) | PET image segmentation method fusing 2-dimensional and 3-dimensional models | |
CN114266939A (en) | Brain extraction method based on ResTLU-Net model | |
CN117876370B (en) | CT image kidney tumor segmentation system based on three-dimensional axial transducer model | |
CN114648541A (en) | Automatic segmentation method for non-small cell lung cancer gross tumor target area | |
CN116934965A (en) | Cerebral blood vessel three-dimensional image generation method and system based on controllable generation and diffusion model | |
Qiu et al. | A deep learning approach for segmentation, classification, and visualization of 3-D high-frequency ultrasound images of mouse embryos | |
Sengun et al. | Automatic liver segmentation from CT images using deep learning algorithms: a comparative study |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | | Effective date of registration: 2023-04-18
Address after: Room 709-G02, Building 13, Hongxing Daduhui, Wuxi Economic Development Zone, Wuxi City, Jiangsu Province, 214000
Applicant after: Wuxi Bencio Intelligent Technology Co.,Ltd.
Address before: A11-2, Phase I, Science and Technology Industrial Park, Yijiang District, Wuhu City, Anhui Province, 241000
Applicant before: ANHUI HAILING INTELLIGENT TECHNOLOGY Co.,Ltd. |
GR01 | Patent grant | ||