CN109559359B - Artifact removal method for reconstructed image from sparse angle data based on deep learning - Google Patents
- Publication number: CN109559359B
- Application number: CN201811137448.2A
- Authority: CN (China)
- Prior art keywords: layer, image, data, convolution, sparse
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
Abstract
Description
Technical Field
The present invention relates to a method, based on deep learning, for removing artifacts from images reconstructed from sparse-angle data.
Background
After more than thirty years of development, computed tomography (CT) has made great strides in both imaging speed and imaging quality. As CT scanning has become more widely used, increasing attention has been paid to the radiation it delivers to the human body: medical research shows that X-ray exposure may induce metabolic abnormalities, cancer, leukemia, and other diseases. In routine CT examinations, the radiation dose received by patients has reached a level that can no longer be ignored. However, reducing the CT scan dose is usually accompanied by a decline in imaging quality, so a current research focus is to reduce the scan dose as far as possible while preserving image quality.
Sparse-angle projection acquisition reduces the scan dose mainly by shortening the CT scan time, but analytical reconstruction from such data introduces severe streak artifacts. Exploiting the redundancy of CT projection data, the missing data under sparse sampling can be estimated and filled in to obtain a complete projection data set, from which the image is then reconstructed. Following this idea, Li et al. used dictionary learning to estimate the missing projection data before reconstruction and obtained better results than the TV method. Zhang et al. applied the directional interpolation method of S2etram to fill in the missing data in projection space and then performed the corresponding reconstruction, achieving better experimental results than traditional linear interpolation. Among approaches based on compressed sensing (CS), Rudin removed Gaussian noise in the projection data with a total-variation method, and Poisson-distributed noise with an improved total-variation method. Ma et al. used the local smoothness of non-local means as a constraint and obtained better results than TV. Hu et al. proposed a progressive two-layer iterative model that introduces a zero-norm prior constraint into 3D CT reconstruction and, under highly sparse conditions, obtained results far superior to those of a one-norm prior constraint.
In addition, iterative reconstruction algorithms are also a solution for sparse-angle projection, but compared with analytical reconstruction algorithms their excessive computational cost limits their wide application.
Among existing methods based on analytical reconstruction, the following problems remain:
1. Existing analytical-reconstruction methods cannot thoroughly and effectively suppress structured, large-scale streak artifacts, while iterative reconstruction methods are too time-consuming.
2. Traditional, non-ResNet neural networks suffer from exploding and vanishing gradients. In general, the deeper the network, the stronger its expressive power; in practice, however, once the number of layers exceeds a certain point the expressive power stops increasing and may even fall below that of a shallower network. The reason is that gradient updates follow the chain rule: the gradient of an early layer is the product of the gradients of all later layers, so as depth grows this gradient becomes extremely small or extremely large (compare 0.9^100 ≈ 0.00002 with 1.1^100 ≈ 13781). During training, the early layers then receive updates that are either far too small or far too large, and the network ultimately fails to converge.
3. The overall receptive field of the Inception-resnet-v2 network is still not large enough to fully capture the characteristics of streak artifacts.
4. Traditional neural networks contain fully connected layers, which lead to a huge number of parameters, long training times, and a tendency to overfit.
Therefore, it is of great significance to bring deep learning methods, which currently excel in computer vision, into research on removing artifacts from sparse-angle analytically reconstructed images.
Summary of the Invention
The object of the present invention is to provide a deep-learning-based method for removing artifacts from images reconstructed from sparse-angle data, solving the problem in the prior art that current analytical-reconstruction methods cannot effectively suppress large-scale streak artifacts.
In the present invention, Inception denotes the basic module of Google's Inception network, and Inception-resnet denotes the newer basic network module that Google designed by combining the residual network with the Inception network; its specific structure is shown in Fig. 2.
In the present invention, the FDK algorithm is a classic three-dimensional cone-beam CT reconstruction algorithm in the field of CT image reconstruction, named after the initials of its three originators.
The technical solution of the present invention is as follows:
A method for removing artifacts from images reconstructed from sparse-angle data, implemented with deep learning. Based on deep learning, a sparse-angle-projection reconstructed-image artifact-elimination neural network combining a fully convolutional network with Inception-resnet network modules is constructed, and the artifacts and noise of sparse-angle CT images are predicted and removed. The method comprises the following steps:
S1. Collect several pairs of projection data, each pair comprising sparse-angle projection data and complete-angle projection data;
S2. Reconstruct separately from the sparse-angle projection data and the complete-angle projection data collected in step S1, and save the reconstructed data sets patchi (i = 1, 2, 3, 4, 5, ...); each pair of data sets comprises a sparse-angle-projection reconstruction data set and a complete-angle-projection reconstruction data set;
S3. Construct the sparse-angle-projection reconstructed-image artifact-elimination neural network;
S4. Using the images reconstructed in step S2 as the training data set and the complete-angle-projection reconstruction data set as reference data, train the artifact-elimination neural network constructed in step S3, and save the model with the best training performance, denoted model;
S5. Load the trained artifact-elimination neural network model saved in step S4, feed the test image test_img into it, and finally predict and save the test image test_img and the corresponding artifact map noise_img;
S6. Subtract the artifact map noise_img obtained in step S5 from the test image test_img saved in step S5 to obtain the clean image clean_img.
Further, in step S1, the angular interval of the collected sparse-angle projection data is not less than 4° and not more than 8°, and the angular interval of the complete-angle projection data is not more than 0.5°; that is, the number of sparse-angle data groups collected lies between 45 and 90.
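As a sanity check on the angular arithmetic above, the following sketch (which assumes a full 360° scan, something the text implies but does not state) reproduces the 45-to-90-group range:

```python
# Number of projection views = full scan range / angular interval.
full_circle = 360.0
sparse_views = [full_circle / step for step in (8.0, 4.0)]  # 4°-8° spacing
complete_views = full_circle / 0.5                          # <=0.5° spacing
print(sparse_views, complete_views)
```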
Further, in step S2, the FDK reconstruction algorithm is used to reconstruct separately from the sparse-angle projection data and the complete-angle projection data collected in step S1. Specifically:
S21. Compute weighting factors for the two-dimensional projection matrix, i.e., the projection data collected in step S1, and multiply them in, correcting the data so that it satisfies the central-slice theorem and obtaining corrected projection data;
S22. Filter the projection data corrected in step S21 row by row with a ramp filter;
S23. Back-project the data filtered in step S22 to reconstruct the image.
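Steps S21 and S22 can be sketched in a few lines of numpy. The function name, geometry values, and the Ram-Lak filter choice below are illustrative assumptions rather than the patent's implementation, and the back-projection of step S23 is omitted:

```python
import numpy as np

def fdk_weight_and_filter(proj, sdd=1000.0, du=1.0):
    """proj: 2D projection (rows x detector columns).
    sdd: assumed source-to-detector distance; du: detector pixel pitch."""
    rows, cols = proj.shape
    u = (np.arange(cols) - cols / 2) * du       # detector coordinate
    w = sdd / np.sqrt(sdd**2 + u**2)            # cosine weighting factor (S21)
    weighted = proj * w[None, :]
    freqs = np.fft.fftfreq(cols, d=du)
    ramp = np.abs(freqs)                        # ramp (Ram-Lak) filter (S22)
    # row-by-row filtering in the frequency domain
    return np.real(np.fft.ifft(np.fft.fft(weighted, axis=1) * ramp, axis=1))

filtered = fdk_weight_and_filter(np.ones((4, 8)))
print(filtered.shape)
```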
Further, the specific structure of the artifact-elimination neural network constructed in step S3 is:
convolution -> BN layer -> relu -> convolution -> BN layer -> relu -> Resblock -> Res_inception_block1 -> Res_inception_block2 -> Res_inception_block3 -> BN layer -> relu -> convolution;
where the BN layer performs batch normalization: during training, Z-score normalization (subtracting the mean and dividing by the standard deviation) is applied to the output data of the network's intermediate layers; relu is the activation function used by the network, with expression relu(x) = max(0, x); concat denotes a matrix concatenation layer, which joins the inputs of several branches along a specified dimension and passes the result to the next layer as a whole; Resblock denotes a basic residual-network module; and Res_inception_block1, Res_inception_block2, and Res_inception_block3 each denote a basic Inception-resnet module, the different numbers indicating different internal structures.
Further, in step S3:
The normal connection path of the Resblock is: input -> convolution -> BN layer -> relu -> convolution -> sum layer; in addition, it has one extra cross-layer (skip) connection running directly from the input to the sum layer, where it is added to the output of the normal path, the result serving as the input of the next layer.
The first connection path of Res_inception_block1 is: input -> convolution layer -> concat layer (channel concatenation layer); the second connection path is: input -> convolution layer -> BN layer -> relu -> convolution -> BN layer -> relu -> convolution -> concat layer; the third path is: input -> convolution layer -> BN layer -> relu -> convolution -> BN layer -> relu -> convolution -> concat layer. The output of these three paths after concatenation in the concat layer is then added in the sum layer to the input of the cross-layer connection path, and the result serves as the input of the next layer.
The first connection path of Res_inception_block2 is: input -> convolution -> concat layer; the second connection path is: input -> convolution -> BN layer -> relu -> convolution -> BN layer -> relu -> convolution -> concat layer. The output of these two paths after concatenation in the concat layer is then added in the sum layer to the input of the cross-layer connection path, and the result serves as the input of the next layer.
The first connection path of Res_inception_block3 is: input -> convolution -> concat layer; the second connection path is: input -> convolution -> BN layer -> relu -> convolution -> BN layer -> relu -> convolution. The output of these two paths after concatenation in the concat layer is then added in the sum layer to the input of the cross-layer connection path, and the result serves as the input of the next layer.
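The two structural ideas shared by these blocks, channel-wise concatenation of parallel branches followed by summation with a skip connection, can be illustrated with a toy numpy forward pass. The "1*1 convolution" here is just a per-pixel linear map, and all shapes and weights are hypothetical, not taken from the patent:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def res_inception_forward(x, w1, w2):
    # x: (H, W, C); w1, w2: (C, C) weights of hypothetical 1x1 convolutions
    branch1 = relu(x @ w1)                                  # conv -> relu
    branch2 = relu(relu(x @ w1) @ w2)                       # conv -> relu -> conv -> relu
    merged = np.concatenate([branch1, branch2], axis=-1)    # concat layer
    # project back to C channels so the skip connection can be summed
    proj = merged @ np.ones((merged.shape[-1], x.shape[-1])) / merged.shape[-1]
    return x + proj                                         # sum layer (skip connection)

x = np.random.default_rng(0).normal(size=(4, 4, 3))
w = np.eye(3)
y = res_inception_forward(x, w, w)
print(y.shape)
```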
Further, step S4 specifically comprises:
S41. Divide the data sets patchi (i = 1, 2, 3, 4, 5, ...) reconstructed in step S2 into patches before feeding them into the artifact-elimination neural network;
S42. Before feeding the data sets patchi (i = 1, 2, 3, 4, 5, ...) into the artifact-elimination neural network, normalize the data: the training samples obtained in step S2 are Z-score-normalized as patchi* = (patchi − mean)/std, where mean is the mean of patchi and std is its standard deviation. Feed patchi* into the artifact-elimination neural network constructed in step S3 for training, which comprises forward propagation and backward propagation; finally, save the trained model model for use in the test of step S5.
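The Z-score normalization of step S42 amounts to the following (array values are illustrative):

```python
import numpy as np

# patch* = (patch - mean) / std, so the normalized patch has zero mean, unit std.
patch = np.array([[0.0, 2.0], [4.0, 6.0]])
mean, std = patch.mean(), patch.std()
patch_norm = (patch - mean) / std
print(patch_norm.mean(), patch_norm.std())
```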
Further, in step S42, during training of the artifact-elimination neural network, the mean squared error (mse) is used as the final loss function for correcting the network parameters, expressed as:
mse = (1/(n·w·h)) · Σ (D − X)²,
where the sum runs over the batch and over all pixels; n is the batch size (Batch-size) of each batch of input data, w and h are the width and height of each sample, D is the result image clean_img obtained after subtracting the artifacts and noise predicted by the network, and X is the reference data, i.e., the difference image between the sparse-angle-projection reconstruction data set and the complete-angle-projection reconstruction data set from step S2. During training, according to the value of the loss function, the network computes layer-by-layer update quantities via the back-propagation algorithm and the Adam optimization algorithm, successively updating the network parameters to optimize the model model.
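A minimal numpy version of this batched MSE loss, with D and X replaced by illustrative arrays, might read:

```python
import numpy as np

def mse_loss(D, X):
    # mse = (1/(n*w*h)) * sum over batch and pixels of (D - X)^2
    n, w, h = D.shape
    return np.sum((D - X) ** 2) / (n * w * h)

D = np.zeros((2, 3, 3))
X = np.ones((2, 3, 3))
print(mse_loss(D, X))  # every pixel differs by 1, so the mean is 1.0
```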
Further, step S5 specifically comprises:
S51. Save the model model trained in step S4;
S52. Feed the test image test_img into the model to predict the artifact map noise_img, and save the predicted noise_img.
Further, step S6 specifically comprises:
S61. Obtain the artifact map noise_img predicted in step S5 and the test image test_img;
S62. Subtract the artifact map noise_img from the test image test_img to obtain the desired clean image clean_img.
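The final subtraction of step S62 is a plain element-wise operation; a minimal sketch with illustrative arrays:

```python
import numpy as np

test_img = np.array([[5.0, 7.0], [9.0, 11.0]])   # reconstruction with artifacts
noise_img = np.array([[1.0, 2.0], [3.0, 4.0]])   # network-predicted artifact map
clean_img = test_img - noise_img                 # clean_img = test_img - noise_img
print(clean_img)
```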
The beneficial effects of the present invention are:
1. This deep-learning-based method for removing artifacts from images reconstructed from sparse-angle data brings deep learning, which currently excels in computer vision, into research on artifact removal for sparse-angle analytically reconstructed images. Exploiting the characteristics of the Inception-resnet network, a neural network with fine and diverse expressive power can be constructed, well suited to removing artifacts from images analytically reconstructed from sparse-angle data.
2. Within the overall structure of the Inception-resnet network, the method of the present invention replaces the plain convolution layers of the original Inception-resnet with ResNet modules to build a deeper neural network, giving the network a larger overall receptive field that better matches the globally distributed character of streak artifacts.
3. The present invention adopts the FCN (fully convolutional network) pattern, i.e., it uses only convolution layers and no fully connected layers. Thanks to the weight sharing and local connectivity of convolution, the number of network parameters is greatly reduced, which greatly shortens training time and improves resistance to overfitting; moreover, FCNs make it much easier to design image-to-image networks, avoiding a great deal of trouble.
Brief Description of the Drawings
Fig. 1 is a flowchart of the method, according to an embodiment of the present invention, for removing artifacts from images reconstructed from sparse-angle data based on deep learning.
Fig. 2 is a schematic diagram of the basic Inception-resnet module used in the embodiment.
Fig. 3 is a schematic diagram of the basic structure of the sparse-angle-projection reconstructed-image artifact-elimination neural network used in the embodiment.
Fig. 4 is a schematic diagram of the structure of the corresponding Resblock module in the embodiment.
Fig. 5 is a schematic diagram of the basic structure of Res_inception_block1 in Fig. 3.
Fig. 6 is a schematic diagram of the basic structure of Res_inception_block2 in Fig. 3.
Fig. 7 is a schematic diagram of the basic structure of Res_inception_block3 in Fig. 3.
Fig. 8 is a schematic diagram of the convolution scheme used in the embodiment.
In the figures, Input denotes the input and Output denotes the output;
Inception denotes the basic module of Google's Inception network;
Inception-resnet denotes the newer basic network module that Google designed by combining the residual network (resnet) with the Inception network;
Conv 1*1 denotes a convolution layer with kernel size 1*1; Conv 1*3 denotes a convolution layer with kernel size 1*3; Conv 3*1 denotes a convolution layer with kernel size 3*1; Conv 3*3 denotes a convolution layer with kernel size 3*3; Conv 1*7 denotes a convolution layer with kernel size 1*7; Conv 7*1 denotes a convolution layer with kernel size 7*1;
Relu is the rectified linear unit, an activation function used by the artifact-elimination neural network;
the BN layer denotes a batch normalization layer;
Filter concat denotes a matrix concatenation layer, which joins the inputs of several branches along a specified dimension and passes the result to the next layer as a whole;
Resblock denotes a basic residual-network module;
Res_inception_block1, Res_inception_block2, and Res_inception_block3 each denote a basic Inception-resnet module, the different numbers indicating different internal structures (generally, different combinations of convolutions).
Detailed Description
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Embodiment
A method for removing artifacts from images reconstructed from sparse-angle data based on deep learning: sparse-angle projection data and complete-angle projection data are collected; reconstructions are performed separately from the sparse-angle and complete-angle projection data; a sparse-angle-projection reconstructed-image artifact-elimination neural network is constructed; the reconstructed images serve as training data; the trained model is saved and the test image is fed into it; and the artifacts and noise predicted by the network are subtracted from the test image to obtain a clean image. Deep learning, which currently excels in computer vision, is thus brought into research on artifact removal for sparse-angle analytically reconstructed images; exploiting the characteristics of the Inception-resnet network, an artifact-elimination neural network with fine and diverse expressive power is constructed, well suited to removing artifacts from images analytically reconstructed from sparse-angle data.
In the embodiment, based on deep learning, a sparse-angle-projection reconstructed-image artifact-elimination neural network combining a fully convolutional network with Inception-resnet is constructed, and the artifacts and noise of sparse-angle CT images are predicted and removed. As shown in Fig. 1, the method comprises the following steps:
S1. Collect five pairs of projection data Ai (i = 1, 2, 3, 4, 5), each pair containing one group of sparse-angle projection data Ai_low and one group of complete-angle projection data Ai_high.
The angular interval of the sparse-angle projection data collected in step S1 is not less than 4° and not more than 8°, and the angular interval of the complete-angle projection data is not more than 0.5°; that is, the number of sparse-angle data groups collected lies between 45 and 90.
S2. Reconstruct from the five pairs of projection data Ai collected in step S1 and save the reconstructed data sets as five pairs S2i (i = 1, 2, 3, 4, 5); each pair comprises a sparse-angle-projection reconstruction data set S2i_low and a complete-angle-projection reconstruction data set S2i_high, and the single-slice image size in every data set is 512*512.
The reconstruction in step S2 uses the FDK algorithm, a classic analytical reconstruction algorithm. Specifically:
S21. Compute weighting factors for the two-dimensional projection matrix and multiply them in, correcting the data so that it approximately satisfies the central-slice theorem.
S22. Filter the corrected projection data row by row with a ramp filter.
S23. Back-project the filtered data to reconstruct the image.
S3. Construct the sparse-angle-projection reconstructed-image artifact-elimination neural network.
S31. The overall structure of the constructed artifact-elimination neural network, shown in Fig. 3, is:
convolution -> BN (batch-normalization) layer -> relu -> convolution -> BN layer -> relu -> Resblock -> Res_inception_block1 -> Res_inception_block2 -> Res_inception_block3 -> BN layer -> relu -> convolution.
Here, the BN layer performs batch normalization: during training, the output of each intermediate layer of the artifact-elimination network is Z-score normalized, i.e., the mean is subtracted and the result is divided by the standard deviation. relu is the activation function used by the network, defined as relu(x) = max(0, x). concat denotes a matrix-concatenation layer, which joins the inputs of several branches along a specified dimension and feeds the result to the next layer as a whole. Resblock denotes the basic module of a residual network. Res_inception_block1, Res_inception_block2, and Res_inception_block3 each denote a basic Inception-ResNet module; the different numbers indicate different internal structures.
S311. The Resblock mentioned in step S31 is shown in Figure 4. The normal connection path is: input -> convolution -> BN layer -> relu -> convolution -> sum layer. Compared with an ordinary convolutional network, it has one additional skip connection running directly from the input to the sum layer, where it is added to the output of the normal path; the sum serves as the input to the next layer.
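The Resblock topology (two convolutions plus a skip connection summed at the output) can be sketched with a toy same-padded convolution; the BN layers are skipped and all kernel values here are placeholders, not the patent's trained parameters.

```python
import numpy as np

def conv2d_same(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """'Same'-padded 2D convolution so the output size equals the input size."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(0.0, x)

def resblock(x: np.ndarray, k1: np.ndarray, k2: np.ndarray) -> np.ndarray:
    """input -> conv -> relu -> conv, then add the skip connection at the sum layer.
    (BN between conv and relu is omitted in this toy sketch.)"""
    y = conv2d_same(x, k1)
    y = relu(y)
    y = conv2d_same(y, k2)
    return y + x  # skip connection

x = np.random.rand(8, 8)
k = np.ones((3, 3)) / 9.0  # placeholder averaging kernel
out = resblock(x, k, k)
assert out.shape == x.shape  # same-padding keeps spatial size unchanged
```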
S312. The structure of Res_inception_block1 in step S31 is shown in Figure 5. First connection path: input -> convolution layer -> filter-concat layer (matrix concatenation). Second connection path: input -> convolution layer -> BN layer -> relu -> convolution -> BN layer -> relu -> convolution -> concat layer. Third connection path: input -> convolution layer -> BN layer -> relu -> convolution -> BN layer -> relu -> convolution -> concat layer. The outputs of these three paths, joined by the concat layer, are then added to the input of the skip-connection path at the sum layer; the result is the input to the next layer.
S313. The structure of Res_inception_block2 in step S31 is shown in Figure 6. First connection path: input -> convolution -> concat layer. Second connection path: input -> convolution -> BN layer -> relu -> convolution -> BN layer -> relu -> convolution -> concat layer. The outputs of these two paths, joined by the concat layer, are added to the input of the skip-connection path at the sum layer; the result is the input to the next layer.
S314. The structure of Res_inception_block3 in step S31 is shown in Figure 7. First connection path: input -> convolution -> concat layer. Second connection path: input -> convolution -> BN layer -> relu -> convolution -> BN layer -> relu -> convolution. The outputs of these two paths, joined by the concat layer, are added to the input of the skip-connection path at the sum layer; the result is the input to the next layer.
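The branch/concat/skip wiring shared by the three Res_inception_block variants can be illustrated with a deliberately tiny stand-in; the `branch` function below is a placeholder for the real conv -> BN -> relu stacks, so only the topology, not the computation, matches the patent.

```python
import numpy as np

def branch(x: np.ndarray, scale: float) -> np.ndarray:
    """Stand-in for one convolutional branch; real blocks use conv/BN/relu stacks."""
    return x * scale

def res_inception_block(x: np.ndarray, scales=(0.5, 0.3)) -> np.ndarray:
    """Branches -> concat along a new channel axis -> merge -> sum with skip."""
    feats = np.stack([branch(x, s) for s in scales], axis=0)  # concat layer
    merged = feats.mean(axis=0)  # plays the role of a 1x1 channel-merging conv
    return merged + x            # skip connection added at the sum layer

x = np.random.rand(4, 4)
out = res_inception_block(x)
assert out.shape == x.shape
```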
The convolutions of the artifact-elimination network are boundary-padded ("same" convolutions), so that the input and output of every layer have the same size.
S4. Use the data sets Bi_low (i = 1, 2, 3, 4, 5) saved in step S2 (i.e., the sparse-angle sets S2i_low) as the training data for the artifact-elimination network, and the data sets Bi_high (i = 1, 2, 3, 4, 5) as the reference data. Train the network constructed in step S3 and save the best-performing model, denoted model.
S41. Divide the data Bi (i = 1, 2, 3, 4, 5) obtained in step S2 into blocks: each image is split into a number of 128×128 patches, giving patch data sets patchi (i = 1, 2, 3, 4, 5), which are then fed into the artifact-elimination network for learning.
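The blocking of step S41 (a 512×512 slice into 128×128 patches) can be written as a reshape, assuming non-overlapping tiling, which the patent does not state explicitly:

```python
import numpy as np

def patchify(img: np.ndarray, p: int = 128) -> np.ndarray:
    """Split an image into non-overlapping p x p patches (cf. step S41)."""
    h, w = img.shape
    assert h % p == 0 and w % p == 0, "image must tile exactly"
    return (img.reshape(h // p, p, w // p, p)
               .transpose(0, 2, 1, 3)
               .reshape(-1, p, p))

img = np.zeros((512, 512))
patches = patchify(img)
assert patches.shape == (16, 128, 128)  # one 512x512 slice yields 16 patches
```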
S42. Before the data patchi (i = 1, 2, 3, 4, 5) are fed in, they must be Z-score normalized: patchi* = (patchi − mean)/std, where mean is the mean of patchi and std its standard deviation. Feed patchi* into the artifact-elimination network for training, and save the best-performing model as model.
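The Z-score normalization of step S42 is the standard subtract-mean, divide-by-standard-deviation transform:

```python
import numpy as np

def zscore(x: np.ndarray) -> np.ndarray:
    """Z-score normalization used in step S42: subtract mean, divide by std."""
    return (x - x.mean()) / x.std()

x = np.array([1.0, 2.0, 3.0, 4.0])
z = zscore(x)
assert abs(z.mean()) < 1e-9        # normalized data has (near-)zero mean
assert abs(z.std() - 1.0) < 1e-9   # ... and unit standard deviation
```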
In step S42, the training of the artifact-elimination network uses the mean squared error (mse) as the final loss function to correct the network parameters:

mse = (1/(n·w·h)) · Σ_{i=1..n} Σ_{x=1..w} Σ_{y=1..h} (D_i(x, y) − X_i(x, y))²
Here n is the batch size (the number of samples fed in per batch), w and h are the width and height of each sample, D is the result clean_img obtained after subtracting the artifacts and noise predicted by the artifact-elimination network, and X is the reference data, i.e., the difference image between the sparse-angle and complete-angle reconstruction data sets from step S2. During training, the network uses the value of the loss function, the back-propagation algorithm, and the Adam optimizer to compute layer-by-layer updates and iteratively adjust the network parameters, optimizing model.
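The mse loss described above, averaged over a batch of n samples of size w×h, can be computed directly:

```python
import numpy as np

def mse_loss(d: np.ndarray, x: np.ndarray) -> float:
    """Batch MSE over n samples of size w x h: mean of squared differences."""
    n, w, h = d.shape
    return float(((d - x) ** 2).sum() / (n * w * h))

d = np.ones((2, 4, 4))   # stand-in network output
x = np.zeros((2, 4, 4))  # stand-in reference data
assert mse_loss(d, x) == 1.0  # every pixel differs by 1, so the mean square is 1
```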
S5. Take the model trained in step S4, feed the image that needs artifact removal into it as the test image test_img, and finally predict and save test_img together with the corresponding artifact map noise_img.
Specifically, step S5 first retrieves the model saved in step S42, feeds the image test_img that needs artifact removal into it to predict the artifact map, and saves the predicted artifact map noise_img.
S6. Subtract the artifact map noise_img from the test image test_img saved in step S5 to obtain the clean image clean_img.
S61. Retrieve the noise_img predicted in step S5, together with the test_img from which it was predicted.
S62. Subtract the corresponding artifact map noise_img from the test image test_img to obtain the desired clean result clean_img.
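Steps S5 and S6 together form the inference path: predict the artifact map, then subtract it. A minimal sketch with a stand-in predictor (the real model is the trained network of step S4):

```python
import numpy as np

def remove_artifacts(test_img: np.ndarray, predict) -> np.ndarray:
    """Steps S5-S6: predict the artifact map, then subtract it from the test image."""
    noise_img = predict(test_img)   # stand-in for the trained model's forward pass
    return test_img - noise_img     # clean_img

fake_model = lambda img: np.full_like(img, 0.1)  # hypothetical artifact predictor
test_img = np.full((4, 4), 0.6)
clean_img = remove_artifacts(test_img, fake_model)
assert np.allclose(clean_img, 0.5)
```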
The artifact-elimination network built in this embodiment draws on the ideas of several advanced neural-network models, such as ResNet, Inception-ResNet, and FCN. The specific structure is shown in Figure 2.
Inspired by ResNet, the artifact-elimination network in this embodiment uses "shortcut connections", as shown in the right half of Figure 2, which alleviate problems such as vanishing and exploding gradients in conventional deep networks. Vanishing gradients are one reason why, once a conventional stacked network grows beyond a certain depth, its performance not only fails to improve but may even degrade. They are a common phenomenon in deep neural networks, and how to deepen a network and increase its expressive power while avoiding them is an active topic in deep learning. A skip connection directly links two non-adjacent hidden layers, so that even in a very deep network where some layers learn poorly, the output of an earlier hidden layer can still be passed forward through the shortcut without severely degrading the overall result. The Inception-ResNet basic module absorbs this advantage of ResNet.
The hallmark of Inception is the parallel use of several different convolution-kernel configurations, i.e., a "wider" structure. Conventional networks extract different features with a number of kernels of a single configuration, whereas an Inception structure extracts them with several kernels of different configurations. This structure fits well the fact that, in sparse-angle CT images, structural artifacts differ markedly from other noise features.
The Inception-ResNet basic module combines ResNet and Inception, as shown in Figure 2, and enjoys the advantages of both. In the overall artifact-elimination network, stacking several Inception-ResNet modules builds a deeper network with far greater expressive power, and thanks to the Inception structure the extraction of diverse features is more effective. The problem studied in this embodiment, in which sparse-angle artifacts are mixed with noise so that the features to be predicted span high, middle, and low levels, is a very good match for a network built from Inception-ResNet modules. As shown in Table 1, several different networks were trained and tested: from top to bottom, a plain residual network, a plainly stacked convolutional network, and the artifact-elimination network of this embodiment. "9 layers" refers to the number of basic blocks used, e.g., Resblock or Inception block. PSNR and SSIM are used as evaluation metrics.
The overall structure of the artifact-elimination network is inspired by the FCN (fully convolutional network). Its benefit is replacing fully connected layers with convolutions; local connectivity and parameter sharing greatly reduce the number of trainable parameters, lowering the hardware requirements of the experiments and, to some extent, the risk of overfitting. Because a fully convolutional network has no fully connected layers, it imposes no hard constraint on the input size, which makes it better suited to image processing.
The network's convolutions are boundary-padded, as shown in Figure 8, so that the input and output of every layer have the same size. Using images reconstructed from sparse-angle data together with images reconstructed from complete-angle data as training data enables the artifact-elimination network to predict the artifacts and other noise in sparse-angle images fairly accurately.
The overall approaches to deep-learning artifact removal fall into two kinds: one maps an image with artifacts and noise directly to the original image; the other maps the image to its noise and artifacts, and then subtracts the predicted noise and artifacts from the input image to recover the original information. According to related research, the second approach requires a simpler network topology than the first, because artifact noise is easier to learn than normal anatomical structure; this method therefore adopts the latter.
The artifact-elimination network in this embodiment uses mse as the final loss function:

mse = (1/(n·w·h)) · Σ_{i=1..n} Σ_{x=1..w} Σ_{y=1..h} (D_i(x, y) − X_i(x, y))²
Here n is the batch size, w and h are the width and height of each sample, D is the result obtained after subtracting the artifacts and noise predicted by the network, and X is the reference image, i.e., the corresponding complete-angle image. During training, the network computes layer-by-layer updates from the value of the loss function using the back-propagation (BP) algorithm and the Adam optimizer, updating the weights and biases.
Parameter settings of the artifact-elimination network: 150 iterations; batch size 48; initial learning rate 0.001, decayed to 60% of its value every 30 iterations; mse loss; Adam optimizer. With this configuration the network was created, trained, and tested. On images reconstructed from 90-angle projection data, the test PSNR reached 36.5915 and the SSIM reached 0.8818, as shown in Table 1.
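The stated learning-rate schedule (initial 0.001, multiplied by 0.6 every 30 iterations) corresponds to a simple step decay:

```python
def learning_rate(iteration: int, base_lr: float = 0.001, decay: float = 0.6,
                  step: int = 30) -> float:
    """Step decay per the embodiment: lr = 0.001 * 0.6 ** (iteration // 30)."""
    return base_lr * decay ** (iteration // step)

assert learning_rate(0) == 0.001
assert abs(learning_rate(30) - 0.0006) < 1e-12       # first decay step
assert learning_rate(90) == 0.001 * 0.6 ** 3          # after three decay steps
```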
Table 1. Training and test results of this embodiment and of the comparison networks.
The specific implementation of this method has been explained above. The method can of course have many other specific embodiments, and those skilled in the art may make various changes and variations without departing from the spirit of the present invention; such changes and variations shall fall within the scope of protection defined by the claims of this application.
Claims (7)
Priority Applications (1)
CN201811137448.2A (CN109559359B): priority date 2018-09-27, filing date 2018-09-27, title "Artifact removal method for reconstructed image from sparse angle data based on deep learning"
Publications (2)
CN109559359A: published 2019-04-02
CN109559359B: granted 2023-06-02
Family ID: 65864849
Legal Events
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant