
CN117475018A - A CT motion artifact removal method - Google Patents

A CT motion artifact removal method

Info

Publication number
CN117475018A
CN117475018A
Authority
CN
China
Prior art keywords
image
motion
network
kernel
motion artifacts
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311417580.XA
Other languages
Chinese (zh)
Inventor
靖稳峰
张雪松
刘盼盼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN202311417580.XA
Publication of CN117475018A
Legal status: Pending (current)


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/003: Reconstruction from projections, e.g. tomography
    • G06T 11/008: Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Learning methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20172: Image enhancement details
    • G06T 2207/20201: Motion blur correction
    • G06T 2210/00: Indexing scheme for image generation or computer graphics
    • G06T 2210/41: Medical

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a CT motion artifact removal method comprising the following steps: Step 1, acquire CT images with motion artifacts; Step 2, screen out clear CT image slices; Step 3, apply an artificially simulated motion process to the clear CT images screened in Step 2 to obtain CT image slices with motion artifacts, yielding a paired CT image dataset; Step 4, train a kernel-based UNet network model using the paired CT image dataset obtained in Step 3; Step 5, test the kernel-based UNet network model trained in Step 4 on CT image data with motion artifacts, and output clear CT image data. The method provided by the invention removes the diverse, non-uniform motion artifacts caused by rigid and non-rigid motion in CT images.

Description

A CT motion artifact removal method

Technical Field

The invention belongs to the technical field of CT artifact correction, and relates to a CT motion artifact removal method.

Background

In the field of motion artifact removal from medical images, there are currently three main solutions. The first is to shorten the scan time, reducing the chance of patient motion during scanning; the second is to use external motion tracking equipment to obtain the parametric trajectory of the patient's motion, from which motion-artifact-free images can be reconstructed; the third is to correct motion artifacts through motion estimation and motion compensation.

Methods that shorten the scan time mainly increase the rotation speed of the X-ray source or use multiple sources and detectors to improve temporal resolution. Although CT scan times are short, artifacts caused by patient motion still occur frequently and can only be repaired by rescanning. Motion data tracking methods obtain the patient's motion trajectory with an external tracking device, use the motion data to restore the consistency of the projections, and finally reconstruct the image to correct the motion artifacts; this approach has high hardware costs and is difficult to deploy widely.

Motion compensation techniques build on external gating and compensate for motion with blur-restoration algorithms. They fall into two categories: methods based on image registration, and methods based on image reconstruction or raw-data restoration. Registration-based restoration first reconstructs the image of each respiratory phase separately, then registers the images of the other phases onto a reference phase, and finally sums the per-phase images to obtain the restored image. For the chest and abdomen, where organ motion is complex, markers cannot accurately reflect the motion of internal organs, and even when elastic transformations are used to refine the registration, local registration errors remain, so registration-based methods do not restore these regions well. Reconstruction-based methods add motion information to the imaging system model, or recover respiratory motion by reorganizing the acquired data. Compared with registration-based methods, iterative reconstruction can incorporate various prior constraints, and restoration based on image reconstruction or raw data can improve the signal-to-noise ratio and contrast of the image.

Summary of the Invention

The purpose of the present invention is to provide a CT motion artifact removal method that is applied directly to medical images reconstructed under conventional (non-gated) scanning conditions. It mitigates the diverse, non-uniform motion artifacts caused by rigid and non-rigid motion in CT images, achieves good image restoration, and, compared with image reconstruction algorithms, runs fast and saves substantial computing resources.

The technical solution adopted by the present invention is a CT motion artifact removal method that specifically includes the following steps:

Step 1, acquire CT images with motion artifacts;

Step 2, screen out clear CT image slices;

Step 3, apply an artificially simulated motion process to the clear CT images screened in step 2 to obtain CT image slices with motion artifacts, yielding a paired CT image dataset;

Step 4, train a kernel-based UNet network model using the paired CT image dataset obtained in step 3;

Step 5, test the kernel-based UNet network model trained in step 4 on CT image data with motion artifacts, and output clear CT image data.

The invention is further characterized as follows:

The specific process of step 1 is:

Step 1.1, scan the human body or a phantom with a CT instrument to obtain the radiation attenuation signals of a helical scan of a given region;

Step 1.2, convert the radiation attenuation signals obtained in step 1.1 into 360-degree detection signals for each point in the scanned region, output as a sinogram-domain image;

Step 1.3, reconstruct the sinogram-domain image obtained in step 1.2 into CT image data with a CT image reconstruction algorithm.

The specific process of step 2 is:

Step 2.1, remove the head and abdomen CT image slices from each CT image sequence one by one through manual inspection;

Step 2.2, discard CT image slices containing voxels whose absolute value exceeds 3000;

Step 2.3, remove CT image slices containing metal artifacts from each CT image sequence one by one through manual inspection;

Step 2.4, assess the tissue structure of each region in every CT image slice and discard blurred slices with motion artifacts, thereby constructing a clear CT image dataset.

The specific process of step 3 is:

Step 3.1, generate a motion severity matrix;

Step 3.2, generate a deformation grid mask based on the motion severity matrix generated in step 3.1;

Step 3.3, incorporate an artificial motion simulation into ASTRA-based reconstruction to obtain data with elastic-deformation motion artifacts.

The specific process of step 3.1 is:

Step 3.1.1, generate a random Gaussian matrix to simulate the motion state of the object or tissue; the matrix size is M*M, centered at the origin, with one matrix each for the x and y axes;

Step 3.1.2, generate and initialize the displacement matrix; its size is N*N, where N<M, with one matrix each for the x and y axes;

Step 3.1.3, randomly generate the center coordinates (Cx, Cy) of the displacement matrix;

Step 3.1.4, assign values to the displacement matrix: it is the N*N submatrix of the Gaussian matrix selected at coordinates (Cx, Cy);

Step 3.1.5, multiply the displacement matrix by the scaling coefficients (Dirx, Diry) to obtain the motion severity matrix, with one matrix each for the x and y axes;

Step 3.1.6, clip the absolute values of the motion severity matrix to within max(Dirx, Diry).

The specific process of step 3.2 is:

Step 3.2.1, put the motion severity matrix into a mask that describes the motion state of each voxel in the CT image slice;

Step 3.2.2, interpolate the deformed image with spline interpolation to obtain the final motion mask.

The specific process of step 3.3 is:

Step 3.3.1, construct a virtual detector;

Step 3.3.2, simulate the CT acquisition process with the virtual detector constructed in step 3.3.1;

Step 3.3.3, obtain the corresponding sinogram-domain data from the simulated CT scan;

Step 3.3.4, obtain the image-domain data, i.e. the CT image slice data for each position, with a CT image reconstruction algorithm.

The specific process of step 4 is:

Step 4.1, derive the variational inference of the posterior distributions of the kernel and the clear image via a Bayesian approach;

Step 4.2, derive the evidence lower bound, and from it obtain the loss function used for deep learning training;

Step 4.3, construct the kernel-based UNet network model, and train it using the paired CT image dataset with motion artifacts obtained in step 3.

The specific process of step 4.3 is:

Step 4.3.1, construct the kernel network KNet. KNet is built on a deep learning network used as its backbone: the backbone first extracts preliminary features from a CT image slice with motion artifacts; a linear fully connected layer transforms these features into a one-dimensional feature vector; the vector is passed through a softmax() function so that its elements sum to 1; finally, this feature is reshaped into a two-dimensional feature representing the blur kernel of that motion-artifact CT image slice;

Step 4.3.2, construct the deblurring network DNet. DNet is built on a deep learning network for image restoration used as its backbone;

Step 4.3.3, use the paired CT image data obtained in step 3 as training data, and train the kernel network KNet and the deblurring network DNet with the loss function given in step 4.2.

The specific process of step 5 is:

Step 5.1, select the paired CT image data from step 3 that was not used for model training as the test CT image dataset;

Step 5.2, estimate the kernel corresponding to a CT image slice with motion artifacts using the kernel network KNet in the kernel-based UNet network model;

Step 5.3, from the motion-artifact CT image slice and the kernel estimated by KNet, comprehensively extract features through the deblurring network and finally restore a motion-artifact-free CT image without artifacts or with clear tissue structure.

The beneficial effects of the present invention are as follows:

(1) The motion artifact simulation method provided by the present invention produces realistic motion artifact and motion blur effects with diverse, non-uniform artifact characteristics, which helps solve the problem of removing motion artifacts from real CT image data;

(2) The motion artifact simulation method provided by the present invention does not need to model physiological mechanisms such as respiratory or cardiac motion; it efficiently simulates motion artifact and motion blur characteristics and thereby constructs paired CT image datasets, addressing the difficulty of acquiring CT image data and the scarcity of paired CT images with motion artifacts;

(3) The kernel-based UNet network (KBUNet) provided by the present invention fully extracts motion blur information and, compared with other methods, significantly improves motion artifact removal in CT images;

(4) The kernel-based UNet network (KBUNet) provided by the present invention solves the difficulty of removing diverse, non-uniform motion artifacts of different motion amplitudes, motion directions, and blur levels. It handles motion artifacts with different characteristics simultaneously, preserves the tissue structure details of artifact-free regions, improves image clarity, and better optimizes CT image quality, which benefits tissue identification and lesion discrimination in the relevant image regions and improves diagnostic outcomes;

(5) The kernel-based UNet network (KBUNet) provided by the present invention removes motion artifacts efficiently and improves the efficiency of CT image diagnosis.

Brief Description of the Drawings

Figure 1 is a flow chart of the CT motion artifact removal method of the present invention;

Figure 2 is a flow chart of acquiring and screening CT image data in the CT motion artifact removal method of the present invention;

Figure 3 is a schematic diagram of motion artifacts in the data acquired in the CT motion artifact removal method of the present invention;

Figure 4 is a flow chart of motion artifact simulation in the CT motion artifact removal method of the present invention;

Figure 5 is a schematic diagram of the motion artifact simulation results obtained with the CT motion artifact removal method of the present invention;

Figure 6 is a schematic diagram of the structure of the kernel-based UNet network KBUNet designed for the CT motion artifact removal method of the present invention;

Figure 7 compares the motion artifact removal results of Example 2 of the CT motion artifact removal method of the present invention;

Figure 8 compares the motion artifact removal results of Example 3 of the CT motion artifact removal method of the present invention;

Figure 9 compares the motion artifact removal results of Example 4 of the CT motion artifact removal method of the present invention.

Detailed Description

The present invention is described in detail below with reference to the drawings and specific embodiments.

The CT motion artifact removal method of the present invention can quickly remove lung motion artifacts while preserving the tissue structure of non-artifact regions, addressing the diverse, non-uniform motion artifacts caused by rigid and non-rigid motion in CT images.

The CT motion artifact removal method of the present invention is a deep learning method. The model is named the kernel-based UNet network (Kernel based UNet, KBUNet) and consists of two parts: the kernel network (Kernel Net, KNet) and the deblurring network (DeblurNet, DNet).

The kernel network (KNet) analyzes the original CT image, which contains artifacts or is unclear, and produces a mask image describing its degree of blur; the feature map it outputs is called the kernel.

The deblurring network (DNet) combines the features of the original artifact-laden or unclear CT image with those of the kernel output by KNet, comprehensively extracts them, and finally restores a motion-artifact-removed CT image that is artifact-free, has fewer artifacts, or has clearer tissue structure.

Example 1

The flow of the CT motion artifact removal method of the present invention is shown in Figure 1; the flow chart of acquiring (step 1) and screening (step 2) CT image data is shown in Figure 2 of the description.

The CT motion artifact removal method of the present invention specifically includes the following steps:

Step 1, acquire CT images with motion artifacts. CT images of the chest and abdomen generally contain motion artifacts, since respiratory and cardiac motion are almost unavoidable.

Detailed process of step 1:

Step 1.1, scan the human body or a phantom with a CT instrument to obtain the radiation attenuation signals of a helical scan of a given region;

Step 1.2, convert the radiation attenuation signals into 360-degree detection signals for each point in the scanned region, output as a sinogram-domain image;

Step 1.3, reconstruct the sinogram-domain image into standard CT image data (DCM format) with a CT image reconstruction algorithm (such as the filtered back-projection algorithm, FBP), which makes the scanned region easy to inspect.
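
As an illustration of steps 1.2 and 1.3, the following minimal Python sketch forward-projects a slice into a sinogram with the ASTRA toolbox and reconstructs it with FBP. The parallel-beam geometry, detector count, and angle count are illustrative assumptions, not values fixed by the invention.

```python
import numpy as np
import astra

# Illustrative geometry (assumed): 512x512 slice, 729 detector bins,
# 360 projection angles over 180 degrees, parallel beam.
vol_geom = astra.create_vol_geom(512, 512)
angles = np.linspace(0, np.pi, 360, endpoint=False)
proj_geom = astra.create_proj_geom('parallel', 1.0, 729, angles)
projector_id = astra.create_projector('linear', proj_geom, vol_geom)

phantom = np.zeros((512, 512), dtype=np.float32)
phantom[160:352, 160:352] = 1.0  # stand-in for a CT slice

# Step 1.2: forward projection -> sinogram-domain image.
sino_id, sinogram = astra.create_sino(phantom, projector_id)

# Step 1.3: FBP reconstruction back to the image domain.
rec_id = astra.data2d.create('-vol', vol_geom)
cfg = astra.astra_dict('FBP')
cfg['ReconstructionDataId'] = rec_id
cfg['ProjectionDataId'] = sino_id
cfg['ProjectorId'] = projector_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id)
reconstruction = astra.data2d.get(rec_id)

# Free ASTRA objects.
astra.algorithm.delete(alg_id)
astra.data2d.delete([rec_id, sino_id])
astra.projector.delete(projector_id)
```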

In this example, the CT image data acquired with the method of step 1 comprises 26 sequences and 6488 slices; each CT image slice is 512×512 pixels.

The acquired CT image data often contains motion artifacts; see Figure 3 of the description, which shows two CT image slices with boxes highlighting the details of several regions. The motion artifacts in CT images are structurally complex and varied: the enlarged lower-left region of the first CT slice is a clear, artifact-free area, while the other seven enlarged details all contain CT motion artifacts of varying severity. This illustrates the non-uniformity of motion artifacts in CT images; the degree and effect of motion differ from region to region. The method provided by the present invention can handle motion artifacts with different characteristics simultaneously, preserve the tissue structure details of artifact-free regions, and improve image clarity.

Step 2, manually screen out clear CT image slices that are free of motion artifacts or contain only slight ones. Detailed process of step 2:

Since respiratory and heartbeat motion are almost unavoidable, and some patients cannot voluntarily hold their breath during medical examinations, many of the CT image slices scanned in step 1 contain motion artifacts. Moreover, because of the complexity and physical peculiarities of CT scanning, other types of artifacts are frequently encountered, and completely clear, clean CT image slices hardly exist. This step therefore removes the main artifacts, namely motion artifacts and metal artifacts. The specific process of step 2 is:

Step 2.1, remove the head and abdomen CT image slices from each CT image sequence one by one through manual inspection, which makes the models of the present invention more specific to lung motion artifacts and improves their performance;

Step 2.2, since the CT values of the voxels in CT image slices of conventional human tissue scans lie in the range -1000 to 1000, automatically discard, as a safety margin, any CT slice containing voxels whose absolute value exceeds 3000;
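
A minimal sketch of the automatic screening rule of step 2.2, assuming the slices are held as NumPy arrays of CT values; the function name and data layout are illustrative.

```python
import numpy as np

def keep_slice(ct_slice: np.ndarray, limit: float = 3000.0) -> bool:
    """Step 2.2 rule: keep a slice only if no voxel exceeds `limit`
    in absolute value (normal tissue lies roughly in -1000..1000,
    so 3000 leaves a safe margin)."""
    return bool(np.max(np.abs(ct_slice)) <= limit)

# Illustrative usage: clean = [s for s in slices if keep_slice(s)]
```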

Step 2.3, remove CT image slices containing metal artifacts from each CT image sequence one by one through manual inspection. Metal artifacts significantly distort the distribution of CT values across the voxels of a slice, and metal artifact removal is another common, important, and hard-to-solve area of CT artifact correction; since the present invention addresses motion artifacts only, the slices with metal artifacts are removed;

Step 2.4, through manual observation and analysis, assess the tissue structure of each region in every CT image slice and discard the slices with noticeable motion blur or obvious motion artifacts, thereby constructing a clear CT image dataset.

In this example, the CT image data obtained by the screening of step 2 comprises 26 sequences and 2902 slices.

The flow chart of simulating motion artifacts on CT image slices (step 3) is shown in Figure 4 of the description.

Step 3, using a dedicated motion artifact simulation method, apply an artificially simulated motion process to the clear CT images screened in step 2 to obtain CT image slices with motion artifacts and the corresponding paired dataset. The paired CT image data obtained in this step can also be used by unsupervised learning methods. Detailed process of step 3:

Step 3.1, generate the motion severity matrix;

Step 3.1.1, generate a random Gaussian matrix (size M*M, centered at the origin, one matrix each for the x and y axes); random matrices with other distributions, or even mixture distributions, can also be used to simulate the motion state of an object or tissue in particular situations;

Step 3.1.2, generate and initialize the displacement matrix (size N*N, where N<M, one matrix each for the x and y axes);

Step 3.1.3, randomly generate the center coordinates (Cx, Cy) of the displacement matrix;

Step 3.1.4, assign values to the displacement matrix: it is the N*N submatrix of the Gaussian matrix selected at coordinates (Cx, Cy);

Step 3.1.5, multiply the displacement matrix by the scaling coefficients (Dirx, Diry) to obtain the motion severity matrix (one matrix each for the x and y axes);

Step 3.1.6, clip the absolute values of the motion severity matrix to within max(Dirx, Diry);
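
The following NumPy sketch mirrors steps 3.1.1 to 3.1.6. The Gaussian smoothing of the random field and the default parameter values are illustrative assumptions; only the submatrix selection, scaling, and clipping follow the text directly.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def motion_severity(M=512, N=101, dir_x=8.0, dir_y=4.0, rng=None):
    """Steps 3.1.1-3.1.6: build the per-axis motion severity matrices."""
    rng = np.random.default_rng() if rng is None else rng
    # 3.1.1: random M*M fields, one per axis (smoothed here so the
    # simulated motion field is spatially coherent; an assumption).
    field_x = gaussian_filter(rng.standard_normal((M, M)), sigma=16)
    field_y = gaussian_filter(rng.standard_normal((M, M)), sigma=16)
    # 3.1.3: random center of the N*N displacement submatrix.
    cx, cy = rng.integers(0, M - N, size=2)
    # 3.1.2 + 3.1.4: the displacement matrices are the N*N submatrices.
    disp_x = field_x[cx:cx + N, cy:cy + N]
    disp_y = field_y[cx:cx + N, cy:cy + N]
    # 3.1.5: scale by (Dir_x, Dir_y).
    sev_x, sev_y = disp_x * dir_x, disp_y * dir_y
    # 3.1.6: clip absolute values to max(Dir_x, Dir_y).
    limit = max(dir_x, dir_y)
    return np.clip(sev_x, -limit, limit), np.clip(sev_y, -limit, limit)
```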

Step 3.2, generate the deformation grid mask, specifically:

Step 3.2.1, put the motion severity matrix into a mask that describes the motion state of each voxel in the CT image slice;

Step 3.2.2, interpolate the deformed image with spline interpolation to obtain the final motion mask;
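
A sketch of steps 3.2.1 and 3.2.2, assuming the severity matrices from step 3.1 are upsampled to the image size and applied as per-voxel displacements; scipy's map_coordinates performs the spline interpolation.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def deform(image, sev_x, sev_y, order=3):
    """Steps 3.2.1-3.2.2: warp `image` by the per-voxel displacements
    in the motion mask, using spline interpolation of order `order`."""
    H, W = image.shape
    # 3.2.1: expand the severity matrices into a full-size motion mask.
    dx = zoom(sev_x, (H / sev_x.shape[0], W / sev_x.shape[1]), order=1)
    dy = zoom(sev_y, (H / sev_y.shape[0], W / sev_y.shape[1]), order=1)
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    # 3.2.2: sample the image at the displaced grid with splines.
    coords = np.stack([ys + dy, xs + dx])
    return map_coordinates(image, coords, order=order, mode='nearest')
```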

Step 3.3, incorporate the artificial motion simulation into ASTRA-based reconstruction, specifically:

Step 3.3.1, construct a virtual detector. The third-party Python library astra can be used, building the virtual detector with its create_proj_geom() and create_projector() methods;

Step 3.3.2, simulate the CT acquisition process, specifically:

Step 3.3.2.1, simulate the motion of the object during CT acquisition. The CT scanning process is simulated in software, and during the scan the scanned object or tissue is moved according to the deformation grid mask obtained in step 3.2, reproducing the motion of the object, in particular 6-degree-of-freedom non-rigid motion. Motion is simulated by deforming the object while it is being scanned;

Step 3.3.2.2, simulate the CT scanning process: simulate the helical CT scan with the virtual detector constructed in step 3.3.1.

Step 3.4, obtain the data with elastic-deformation motion artifacts, specifically:

Step 3.4.1, scan to acquire the sinogram-domain data: obtain the corresponding sinogram-domain data from the simulated CT scan;

Step 3.4.2, reconstruct the sinogram-domain data: obtain the image-domain data, i.e. the CT image slice data for each position, with a CT image reconstruction algorithm (such as the filtered back-projection algorithm, FBP).
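
Putting steps 3.3 and 3.4 together, the sketch below deforms the slice a little more at each projection angle (deforming while scanning), forward-projects the deformed slice, and keeps only that angle's sinogram row; reconstructing the assembled sinogram with FBP, as in the step 1.3 sketch, then yields a slice with elastic-deformation motion artifacts. The 2-D parallel-beam geometry is a simplification of the helical scan, the linear ramp-up of the deformation over the angles is an illustrative choice, and deform() is the helper from the step 3.2 sketch.

```python
import numpy as np
import astra

def simulate_motion_sinogram(image, sev_x, sev_y, n_angles=360):
    """Steps 3.3.2-3.4.1: build a sinogram in which each row (angle)
    sees the object at a different stage of its deformation."""
    H, W = image.shape
    det_count = int(1.5 * W)
    vol_geom = astra.create_vol_geom(H, W)
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)
    proj_geom = astra.create_proj_geom('parallel', 1.0, det_count, angles)
    proj_id = astra.create_projector('linear', proj_geom, vol_geom)

    sino = np.zeros((n_angles, det_count), dtype=np.float32)
    for i in range(n_angles):
        # Illustrative motion law: deformation grows linearly in time.
        frac = i / (n_angles - 1)
        moved = deform(image, frac * sev_x, frac * sev_y)
        sid, full_sino = astra.create_sino(moved.astype(np.float32), proj_id)
        sino[i] = full_sino[i]  # keep only the row scanned at this instant
        astra.data2d.delete(sid)
    astra.projector.delete(proj_id)
    return sino  # step 3.4.2: reconstruct with FBP as in the step 1.3 sketch
```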

In this example, the CT image data simulated with the method of step 3 comprises one set of artifact-free data and a matching set with motion artifacts; each set contains 130 sequences and 14510 slices.

A schematic of the motion artifact simulation results of the present invention (step 3) is shown in Figure 5 of the description. The first row shows the original CT image slices, and the second row shows the CT image slices with motion artifacts produced by the simulation method of step 3. The motion amplitude, direction, and characteristics of each image are random, which increases the diversity of the data samples and yields a paired dataset that supports the training and application of self-supervised learning methods.

Step 4, train the kernel-based UNet network (KBUNet), which comprises the two sub-networks kernel network (KNet) and deblurring network (DNet), using the paired CT image dataset obtained in step 3, and optimize the parameters of the deep learning network model with a Bayesian approach. Detailed process of step 4:

Step 4.1, derive the variational inference of the posterior distributions of the kernel and the clear image via a Bayesian approach, so that it can be computed numerically and implemented in software;

Step 4.1.1, assume that the image blur model corresponding to the CT motion artifact generation mechanism is

y = k*x + n  (1)

where x is the clear image, y is the blurred image, k is the blur kernel, n is the noise, and * is the two-dimensional convolution operator;

Step 4.1.2, assume the true clear image x follows the distribution z, and the true blur kernel k follows the distribution h;

Step 4.1.3, assume the blur kernel k follows a Dirichlet distribution (meaning that the elements of the blur kernel corresponding to a single blurred image sum to 1):

p(π|α) = (Γ(α1+...+αM)/(Γ(α1)...Γ(αM))) · π1^(α1-1)...πM^(αM-1)  (2)

where the parameters of the Dirichlet distribution are π = (π1, π2, ..., πM) and α = (α1, α2, ..., αM), with 0 < πm < 1 and αm > 0 for m = 1, ..., M, and π1+π2+...+πM = 1.

Step 4.1.4, derive the posterior distribution of the clear-image distribution z and the true blur-kernel distribution h:

p(z,h|y) ∝ p(z,h,y) = p(y|z,h)p(z)p(h)  (3)

Step 4.1.5, derive the variational inference of the posterior distributions of the clear image z and the true blur kernel h:

qΨ(z|y,h) = N(z; μ(y,h;Ψ), diag(m(y,h;Ψ)))  (4)

qΦ(h|y) = Dir(h; ξ(y;Φ))  (5)

where Ψ denotes the parameters for inferring z from h and y: the input is the blurred image y and the estimated blur kernel h, and the output is the approximate posterior (the restored image) z, whose components are assumed to have means μi and variances mi. Correspondingly, Φ denotes the parameters for estimating the blur kernel h from the blurred image y.

Step 4.2, derive the evidence lower bound, and from it the loss function used for deep learning training, specifically:

Step 4.2.1, derive the evidence lower bound (ELBO) corresponding to step 4.1.5:

L(Ψ,Φ) = E_{qΨ(z|y,h)qΦ(h|y)}[log p(y|z,h)] - DKL(qΨ(z|y,h)||p(z)) - DKL(qΦ(h|y)||p(h))  (6)

where L is the objective from which the loss function is taken and DKL is the KL divergence. In the expanded form of the two KL terms, ξi and ki are parameters of the Dirichlet distribution (see formula (2)), ψ0 is the digamma function, μi and mi are the mean and variance of z in formula (4), and ε0 is the computed error.

Step 4.2.2, from formula (6), the corresponding deep learning optimization objective is derived, namely maximizing the ELBO over the variational parameters, as in formula (9):

(Ψ*, Φ*) = argmax over (Ψ,Φ) of L(Ψ,Φ)  (9)

Step 4.2.3, the corresponding deep learning loss function is then obtained as the negative of this objective, as in formula (10):

Loss(Ψ,Φ) = -L(Ψ,Φ)  (10)

Step 4.3, construct the deep learning model, and train the kernel network (KNet) and the deblurring network (DNet) proposed by the present invention using the paired CT image dataset with motion artifacts obtained in step 3;

A schematic diagram of the structure of the kernel-based UNet network KBUNet designed by the present invention (step 4) is shown in Figure 6 of the description.

Step 4.3.1, construct the kernel network (KNet). KNet is built on a simple, shallow deep learning network (such as ResNet or DnCNN) as its backbone. The backbone first extracts preliminary features from a CT image slice with motion artifacts; a linear fully connected layer transforms these features into a one-dimensional feature vector; the vector is passed through a softmax() function so that its elements sum to 1 (satisfying the assumption of step 4.1.3 that the blur kernel follows a Dirichlet distribution); finally, this feature is reshaped into a two-dimensional feature representing the blur kernel of that motion-artifact CT image slice;

In this example, the blur kernel data and the related dimensions of the kernel network (KNet) use a size of 101.
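
A minimal PyTorch sketch of the KNet construction of step 4.3.1, with a small convolutional backbone standing in for ResNet/DnCNN; the layer sizes are illustrative assumptions, while the softmax constraint (kernel elements summing to 1, per step 4.1.3) and the reshape to a 101×101 kernel follow the text.

```python
import torch
import torch.nn as nn

class KNet(nn.Module):
    """Step 4.3.1: estimate a 101x101 blur kernel from a blurred slice."""
    def __init__(self, kernel_size=101):
        super().__init__()
        self.kernel_size = kernel_size
        # Shallow convolutional backbone (stand-in for ResNet/DnCNN).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        # Linear fully connected layer -> one-dimensional feature.
        self.fc = nn.Linear(64 * 8 * 8, kernel_size * kernel_size)

    def forward(self, blurred):  # blurred: (B, 1, H, W)
        feats = self.backbone(blurred).flatten(1)
        logits = self.fc(feats)
        # softmax() makes the elements sum to 1 (Dirichlet assumption).
        k = torch.softmax(logits, dim=1)
        # Reshape the one-dimensional feature into the 2-D blur kernel.
        return k.view(-1, 1, self.kernel_size, self.kernel_size)
```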

Step 4.3.2, construct the deblurring network (DNet). DNet is built on a deep learning network for image restoration (such as U-Net, NAFNet, or MPRNet) as its backbone.

Step 4.3.2.1, design the basic architecture of the deblurring network (DNet). The blur kernel obtained in step 4.3.1 is injected into every layer of the backbone, fusing the feature maps of each layer with the kernel; this amounts to introducing motion blur information into the image restoration process and yields clearer CT image slices with artifacts removed or attenuated;

In this example, the DNet backbone is MPRNet, whose UNet modules use a depth of 6 layers.

Step 4.3.2.2, design the method for fusing each backbone layer's feature map with the blur kernel: merge the kernel with the current layer's feature map and apply convolution and activation operations to obtain feature I; convolve the kernel with feature I to obtain feature II; apply a convolution to feature I to obtain feature III; merge and add features II and III to obtain the final fused feature;

In this example, every UNet module and ORSNet module in the deblurring network (DNet) is fused with the blur kernel data output by the kernel network (KNet).
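
A sketch of the per-layer fusion of step 4.3.2.2. The channel counts are illustrative, the kernel map is assumed to be resized to the feature map's spatial size before fusion, and each "convolve" in the text is read here as a learned convolution over a concatenation; all of these are assumptions, since the text does not fix them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelFusion(nn.Module):
    """Step 4.3.2.2: fuse one layer's feature map with the blur kernel."""
    def __init__(self, channels):
        super().__init__()
        self.conv_i = nn.Conv2d(channels + 1, channels, 3, padding=1)
        self.conv_ii = nn.Conv2d(channels + 1, channels, 3, padding=1)
        self.conv_iii = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feature, kernel):
        # Resize the kernel map to the feature map's spatial size (assumed).
        k = F.interpolate(kernel, size=feature.shape[-2:],
                          mode='bilinear', align_corners=False)
        # Merge kernel and feature, then convolution + activation -> feature I.
        feat_i = F.relu(self.conv_i(torch.cat([feature, k], dim=1)))
        # Convolve the kernel with feature I -> feature II.
        feat_ii = self.conv_ii(torch.cat([feat_i, k], dim=1))
        # Convolve feature I -> feature III.
        feat_iii = self.conv_iii(feat_i)
        # Merge and add features II and III -> final fused feature.
        return feat_ii + feat_iii
```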

Step 4.3.3, train the kernel network (KNet) and the deblurring network (DNet), specifically:

Step 4.3.3.1, training scheme. Use the paired CT image data obtained in step 3 as training data, and train the kernel network (KNet) and the deblurring network (DNet) with the loss function given in step 4.2.

Step 4.3.3.2, training details. KNet and DNet can be trained separately: while training one network, freeze the parameters of the other, cut the connection and feature fusion between the models, and train the two networks individually with the paired CT image slice data and the simulated blur kernels; once the training has stabilized, train all parameters jointly. Alternatively, the two models can be trained jointly from the start. The training process can be ended once the training metrics have stabilized;

In this example, epochs is set to 100 and batch size to 8. The 14510 pairs of matched CT image slices simulated in step 3 are split 4:1 into a training set (11608 pairs of CT image slices) and a test set (2902 pairs of CT image slices), and the training set is used for training. Gradient clipping, both by norm and by absolute value, is applied during training to keep the model parameters from exploding.
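
A training-loop sketch matching the details of step 4.3.3.2 (100 epochs, batch size 8, gradient clipping by norm and by absolute value). The optimizer, learning rate, clipping thresholds, and the DNet call signature are illustrative assumptions, and loss_fn stands in for the ELBO-derived loss of step 4.2.

```python
import torch

def train(knet, dnet, loader, loss_fn, epochs=100, lr=1e-4):
    """Step 4.3.3: joint training of KNet and DNet with gradient clipping."""
    params = list(knet.parameters()) + list(dnet.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for epoch in range(epochs):
        for blurred, clean in loader:  # paired slices from step 3
            kernel = knet(blurred)
            restored = dnet(blurred, kernel)
            loss = loss_fn(restored, clean, kernel)
            opt.zero_grad()
            loss.backward()
            # Clip gradients by norm and by absolute value to keep
            # the model parameters from exploding.
            torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)
            torch.nn.utils.clip_grad_value_(params, clip_value=0.5)
            opt.step()
```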

Step 5, use CT image data with motion artifacts to test and evaluate the performance of the models of the CT motion artifact removal method of the present invention, and output the corresponding clearer CT image data, free of motion artifacts or with reduced ones, achieving fast and efficient motion artifact removal.

Detailed process of step 5:

Step 5.1, select the CT image dataset used for testing. The paired CT image dataset from step 3 that was not used for model training can be used, which makes it convenient to compute image metrics and measure the deblurring performance of the model; real, unpaired blurred data with motion artifacts can also be used, both to remove motion artifacts efficiently and to assess the deblurring effect;

In this example, the pre-split test set (2902 pairs of CT image slices) is used to evaluate the image quality and compute the main metrics: MSE, PSNR, and SSIM. On the test set, the method finally reached an MSE of 2.83×10^-4, a PSNR of 36.24, and an SSIM of 0.96.
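
A sketch of how the reported test metrics can be computed, assuming slices normalized to [0, 1] and using scikit-image's implementations of PSNR and SSIM.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(restored: np.ndarray, reference: np.ndarray):
    """Compute MSE, PSNR, and SSIM for one pair of slices in [0, 1]."""
    mse = float(np.mean((restored - reference) ** 2))
    psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
    ssim = structural_similarity(reference, restored, data_range=1.0)
    return mse, psnr, ssim
```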

In this example, the real slices with motion artifacts that were discarded in step 2.4 are also used for evaluation and testing.

Step 5.2, estimate the kernel corresponding to a CT image slice with motion artifacts using the kernel network (KNet);

Step 5.3, from the motion-artifact CT image slice and the kernel estimated by the kernel network (KNet), comprehensively extract features through the deblurring network (DeblurNet, DNet) and finally restore a motion-artifact-removed CT image that is artifact-free, has fewer artifacts, or has clearer tissue structure.

Example 2

See Figure 7 of the description. The first image in the first row is the original motion-artifact-free CT image slice from step 2; the first image in the second row is the paired slice with motion artifacts simulated in step 3. The second, third, and fourth images in the first row are the artifact removal results of trained DnCNN, UNet, and MPRNet models respectively; the second, third, and fourth images in the second row are the results of a trained DeepDeblurNet model, a trained NAFNet model, and a diffusion-model method respectively. The fifth image in the first row is the result of the KBUNet model provided by the present invention trained on patch data, i.e. with each slice of the original 512×512-pixel CT image slice dataset of step 4.3.3.2 split into 64×64-pixel CT image patches; the fifth image in the second row is the result of the KBUNet model trained on the original 512×512-pixel CT image slices of step 4.3.3.2.

Example 3

See Figure 8 of the description. The panel layout is the same as in Figure 7 of Example 2: the original artifact-free slice and the simulated motion-artifact slice in the first column; the results of DnCNN, UNet, MPRNet, DeepDeblurNet, NAFNet, and the diffusion-model method; and the results of the KBUNet model of the present invention trained on 64×64-pixel patches and on the original 512×512-pixel slices.

Example 4

See Figure 9 of the description. The panel layout is the same as in Figure 7 of Example 2: the original artifact-free slice and the simulated motion-artifact slice in the first column; the results of DnCNN, UNet, MPRNet, DeepDeblurNet, NAFNet, and the diffusion-model method; and the results of the KBUNet model of the present invention trained on 64×64-pixel patches and on the original 512×512-pixel slices.

Figures 7, 8, and 9 of the description, corresponding to Examples 2 to 4, show the motion artifact removal results on CT image slices of different parts of the chest. As can be seen, the kernel-based UNet network (KBUNet) provided by the present invention fully extracts motion blur information and, compared with the other methods, significantly improves motion artifact removal in CT images. It solves the difficulty of removing diverse, non-uniform motion artifacts of different motion amplitudes, motion directions, and blur levels: it handles motion artifacts with different characteristics simultaneously, preserves the tissue structure details of artifact-free regions, improves image clarity, and better optimizes CT image quality, which benefits tissue identification and lesion discrimination in the relevant image regions and improves diagnostic outcomes; it also removes motion artifacts efficiently and improves the efficiency of CT image diagnosis.

In the manner described above, the present invention discloses a CT motion artifact removal method that removes motion artifacts efficiently, improves the artifact removal effect, and enhances CT image quality to meet practical needs.

The method provided by the present invention is an image restoration method, an image-based (post-reconstruction) algorithm applied directly to medical images reconstructed under conventional (non-gated) scanning conditions. It requires neither hardware assistance nor image registration, runs fast, achieves good image restoration, and saves substantial computing resources.

Claims (10)

1. A CT motion artifact removal method, characterized by comprising the following steps:
step 1, acquiring CT images with motion artifacts;
step 2, screening out clear CT image slices;
step 3, adding an artificial simulation motion process to the clear CT images screened in the step 2, so as to obtain CT image slices with motion artifacts, and obtaining paired CT image data sets;
step 4, training a core-based UNet network model by using the paired CT image data set obtained in the step 3;
and 5, testing the core-based UNet network model trained in the step 4 by using CT image data with motion artifacts, and outputting clear CT image data.
2. The method for removing CT motion artifacts according to claim 1, wherein the specific procedure of step 1 is as follows:
step 1.1, scanning a human body or a phantom by using a CT instrument to obtain a corresponding radiation attenuation signal of helical scanning of a certain area;
step 1.2, converting the radiation attenuation signals obtained in the step 1.1 into 360-degree detection signals of each point in a relevant scanning area, and outputting the detection signals as chord domain images;
and step 1.3, reconstructing the chord domain image obtained in the step 1.2 into CT image data through a CT image reconstruction algorithm.
3. The method for removing CT motion artifacts according to claim 2, wherein the specific process of step 2 is as follows:
step 2.1, removing head and abdomen CT image slices in each CT image sequence one by one through manual observation;
step 2.2, eliminating CT image slices containing voxels with absolute values greater than 3000;
step 2.3, removing CT image slices containing metal artifacts in each CT image sequence one by one;
and 2.4, judging the tissue structure of each region in each CT image slice, and removing the blurred CT image slices with motion artifacts so as to construct a clear CT image data set.
4. The method for removing CT motion artifacts according to claim 1, wherein the specific procedure in step 3 is as follows:
step 3.1, generating a motion severity matrix;
step 3.2, generating a deformation grid mask based on the motion severity matrix generated in the step 3.1;
and 3.3, adding an artificial motion simulation process by using an ASTRA reconstruction algorithm to obtain data with elastic deformation motion artifacts.
5. The method of claim 4, wherein the specific process of step 3.1 is as follows:
step 3.1.1, generating a random Gaussian matrix to simulate the motion state of an object or tissue, wherein the matrix size is M*M, the center is at the origin, and the x and y axes correspond to two matrices;
step 3.1.2, generating and initializing a displacement matrix displacement, wherein the size of the matrix is N, and N < M, and x and y axes correspond to two matrices;
step 3.1.3, randomly generating the center coordinates (C x ,C y );
Step 3.1.4, assigning a displacement matrix displacement, which is the coordinates (C x ,C y ) The size of the selected submatrix is N;
step 3.1.5 multiplying the displacement matrix displacement by the scaling factor (Dir x ,Dir y ) Obtaining a motion severity matrix, wherein the x-axis and the y-axis correspond to the two matrices;
step 3.1.6, clipping the absolute value range of the motion severity matrix to max (Dir x ,Dir y ) Within the range.
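A minimal NumPy sketch of steps 3.1.1-3.1.6 follows. The claim's "random Gaussian matrix" is read here as i.i.d. Gaussian noise (a Gaussian-shaped bump is an equally plausible reading), the random coordinates are taken as the submatrix corner for simplicity, and all sizes and scale factors are illustrative.

```python
import numpy as np

def motion_severity(M=512, N=64, dir_x=4.0, dir_y=4.0, seed=None):
    """Illustrative generation of the per-axis motion severity matrices."""
    rng = np.random.default_rng(seed)
    gauss_x = rng.standard_normal((M, M))       # step 3.1.1: one matrix per axis
    gauss_y = rng.standard_normal((M, M))
    cx = int(rng.integers(0, M - N))            # step 3.1.3: random submatrix origin
    cy = int(rng.integers(0, M - N))
    disp_x = gauss_x[cy:cy + N, cx:cx + N] * dir_x  # steps 3.1.2-3.1.5
    disp_y = gauss_y[cy:cy + N, cx:cx + N] * dir_y
    bound = max(dir_x, dir_y)                   # step 3.1.6: clip severity
    return np.clip(disp_x, -bound, bound), np.clip(disp_y, -bound, bound)
```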
6. The method according to claim 5, wherein the specific process of step 3.2 is:
step 3.2.1, writing the motion severity matrix into a mask that describes the motion of each voxel in the CT image slice;
and step 3.2.2, interpolating the deformed image with spline interpolation to obtain the final motion mask.
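A hedged SciPy sketch of steps 3.2.1-3.2.2 follows. It assumes the severity matrices are upsampled to a per-voxel mask before warping, and uses cubic-spline resampling via scipy.ndimage.map_coordinates; the patent may condition the mask differently.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def warp_slice(image, disp_x, disp_y):
    """Warp a slice by a per-voxel displacement field with spline interpolation."""
    h, w = image.shape
    # Upsample the N x N severity matrices to the slice size (step 3.2.1).
    fx = zoom(disp_x, (h / disp_x.shape[0], w / disp_x.shape[1]), order=1)
    fy = zoom(disp_y, (h / disp_y.shape[0], w / disp_y.shape[1]), order=1)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    coords = np.stack([yy + fy, xx + fx])        # deformation grid mask
    # Cubic-spline resampling of the deformed image (step 3.2.2).
    return map_coordinates(image, coords, order=3, mode='nearest')
```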
7. The method of claim 5, wherein the specific process of step 3.3 is:
step 3.3.1, constructing a virtual detector;
step 3.3.2, simulating the CT acquisition process with the virtual detector constructed in step 3.3.1;
step 3.3.3, obtaining the corresponding chord-domain data from the simulated CT scan;
and step 3.3.4, obtaining the image-domain data, namely the CT image slice corresponding to each point position, through a CT image reconstruction algorithm.
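The sketch below illustrates steps 3.3.1-3.3.4, assuming the ASTRA Toolbox's Python interface (astra.create_vol_geom, astra.create_proj_geom, astra.create_sino, astra.create_reconstruction). It reuses warp_slice, disp_x and disp_y from the preceding sketches and assumes clear_slice is a 512 x 512 array; forward-projecting a progressively warping slice one view at a time is one plausible way to bake elastic motion into the sinogram, not necessarily the patent's.

```python
import numpy as np
import astra  # the ASTRA toolbox named in step 3.3

n_views, n_det = 360, 736                       # illustrative geometry
vol_geom = astra.create_vol_geom(512, 512)
angles = np.linspace(0, np.pi, n_views, endpoint=False)
proj_geom = astra.create_proj_geom('parallel', 1.0, n_det, angles)  # step 3.3.1
proj_id = astra.create_projector('linear', proj_geom, vol_geom)

sino = np.zeros((n_views, n_det), dtype=np.float32)
for i in range(n_views):                        # step 3.3.2: simulated acquisition
    frac = i / n_views                          # motion accumulates over the scan
    moved = warp_slice(clear_slice, frac * disp_x, frac * disp_y)
    sid, full = astra.create_sino(moved, proj_id)
    sino[i] = full[i]                           # keep only the view acquired now
    astra.data2d.delete(sid)
# sino is the chord-domain data of step 3.3.3.

rec_id, corrupted = astra.create_reconstruction('FBP', proj_id, sino)  # step 3.3.4
astra.data2d.delete(rec_id)
astra.projector.delete(proj_id)
```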
8. The method for removing CT motion artifacts according to claim 1, wherein the specific procedure of step 4 is as follows:
step 4.1, deriving, through a Bayesian formulation, the variational inference of the posterior distribution of the kernel and the clear image;
step 4.2, deriving the evidence lower bound and, from it, the loss function used for the deep-learning training;
and step 4.3, constructing a kernel-based UNet network model and training it with the paired CT image data set with motion artifacts obtained in step 3.
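The claims do not spell out the evidence-lower-bound decomposition, so the following is only a hedged PyTorch sketch of what such a loss could look like: a data term that re-blurs the restored slice with the estimated kernel, a supervision term against the paired clear slice, and a KL-style penalty pulling the kernel toward a flat prior. Tensor shapes are assumed to be (1, 1, H, W) for images and (1, 1, k, k) for the kernel, with kernel entries non-negative and summing to 1.

```python
import math
import torch
import torch.nn.functional as F

def elbo_style_loss(blurred, restored, kernel, clean_gt, kl_weight=1e-3):
    # Data (likelihood) term: re-blur the restored slice with the
    # estimated kernel and compare with the observed blurred slice.
    pad = kernel.shape[-1] // 2
    reblurred = F.conv2d(restored, kernel, padding=pad)
    data_term = F.mse_loss(reblurred, blurred)
    # Supervision term against the paired clear slice from step 3.
    sup_term = F.mse_loss(restored, clean_gt)
    # KL-style penalty: KL(kernel || uniform) over the kernel entries.
    flat = 1.0 / kernel.numel()
    kl_term = (kernel * (kernel.clamp_min(1e-12).log() - math.log(flat))).sum()
    return data_term + sup_term + kl_weight * kl_term
```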
9. The method of claim 8, wherein the specific process of step 4.3 is:
step 4.3.1, constructing the kernel network KNet with a deep-learning network as its backbone: first extracting preliminary features from the CT image slice with motion artifacts through the backbone; then mapping them to a one-dimensional feature through a linear fully connected layer; then normalizing that feature with a softmax() function so that its element values sum to 1; and finally reshaping the result into a two-dimensional feature, namely the blur kernel corresponding to the CT image slice with motion artifacts;
step 4.3.2, constructing the deblurring network DNet, using a deep-learning network for image restoration as its backbone;
and step 4.3.3, training the kernel network KNet and the deblurring network DNet with the loss function given in step 4.2, using the paired CT image data obtained in step 3 as training data.
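A minimal PyTorch sketch of the KNet branch of step 4.3.1 follows. The patent only says "a deep learning network as a reference network", so a small CNN stands in for the backbone, and the kernel size of 15 is an illustrative choice.

```python
import torch
import torch.nn as nn

class KNet(nn.Module):
    """Kernel-estimation branch: features -> linear layer -> softmax -> 2-D kernel."""
    def __init__(self, kernel_size=15):
        super().__init__()
        self.backbone = nn.Sequential(           # stand-in backbone
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, kernel_size * kernel_size)
        self.kernel_size = kernel_size

    def forward(self, x):
        feats = self.backbone(x).flatten(1)      # preliminary features
        logits = self.fc(feats)                  # linear layer -> 1-D feature
        k = torch.softmax(logits, dim=1)         # element values sum to 1
        # Reshape into the 2-D blur kernel for the input slice.
        return k.view(-1, 1, self.kernel_size, self.kernel_size)
```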
10. The method of claim 9, wherein the specific process of step 5 is:
step 5.1, selecting the paired CT image data from step 3 that were not used for model training as the CT image data set for testing;
step 5.2, estimating the kernel corresponding to a CT image slice with motion artifacts through the kernel network KNet of the kernel-based UNet network model;
and step 5.3, from the CT image slice with motion artifacts and the kernel estimated by KNet, recovering through the deblurring network DNet the final motion-artifact-free CT image with clear tissue structure.
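A test-time sketch of steps 5.1-5.3 follows. How DNet consumes the estimated kernel (extra input channel, feature modulation, etc.) is an assumption of this sketch; the claims do not pin it down.

```python
import torch

@torch.no_grad()
def remove_artifacts(knet, dnet, blurred_slice):
    """Inference pass: estimate the blur kernel, then deblur conditioned on it."""
    kernel = knet(blurred_slice)            # step 5.2: estimated blur kernel
    restored = dnet(blurred_slice, kernel)  # step 5.3: kernel-conditioned deblurring
    return restored, kernel
```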
CN202311417580.XA, A CT motion artifact removal method; filed 2023-10-30, priority date 2023-10-30; status: Pending; published as CN117475018A.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202311417580.XA | 2023-10-30 | 2023-10-30 | A CT motion artifact removal method (published as CN117475018A)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202311417580.XA | 2023-10-30 | 2023-10-30 | A CT motion artifact removal method (published as CN117475018A)

Publications (1)

Publication Number | Publication Date
CN117475018A | 2024-01-30

Family

ID=89635791

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202311417580.XA | A CT motion artifact removal method | 2023-10-30 | 2023-10-30

Country Status (1)

Country | Link
CN | CN117475018A

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118298070A * | 2024-06-06 | 2024-07-05 | 深圳市计量质量检测研究院(国家高新技术计量站、国家数字电子产品质量监督检验中心) | Nuclear magnetic resonance artifact removal method and system based on multi-scale neural network


Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination