
CN109584254B - Heart left ventricle segmentation method based on deep full convolution neural network - Google Patents


Info

Publication number: CN109584254B
Application number: CN201910012180.8A
Authority: CN (China)
Prior art keywords: left ventricle, output, image, segmentation, input
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: CN109584254A (Chinese, zh)
Inventors: 刘华锋, 陈明强
Current and original assignee: Zhejiang University (ZJU) (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Events: application filed by Zhejiang University (ZJU); priority to CN201910012180.8A; publication of CN109584254A; application granted; publication of CN109584254B; anticipated expiration per legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30048 Heart; Cardiac

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for segmenting the left ventricle of the heart based on a deep fully convolutional neural network. The method introduces deep learning into left-ventricle segmentation of cardiac magnetic resonance short-axis images, and its process consists of two stages, training and prediction. In the training stage, preprocessed 128×128 cardiac magnetic resonance images serve as the network input, and manually produced annotations serve as the labels used to compute the error; as the number of training iterations increases, the training-set and validation-set errors gradually decrease. In the test stage, the test-set data are fed into the trained model, and the network outputs a prediction for every pixel to generate the segmentation result. The invention performs segmentation of cardiac magnetic resonance short-axis images from a data-driven perspective, removes the time-consuming and labor-intensive need to draw contours by hand, overcomes shortcomings of traditional image segmentation algorithms, and achieves accurate and robust left-ventricle segmentation.

Description

A Heart Left Ventricle Segmentation Method Based on a Deep Fully Convolutional Neural Network

Technical Field

The invention belongs to the technical field of medical image analysis, and in particular relates to a method for segmenting the left ventricle of the heart based on a deep fully convolutional neural network.

Background Art

In recent years, cardiovascular disease has become one of the leading threats to human life and health; with rising living standards and the rapid development of modern medicine, early diagnosis and risk assessment of cardiovascular disease have become important conditions for improving quality of life. In addition, with continuous advances in medical technology, the imaging modalities capable of dynamic imaging of the heart mainly include magnetic resonance imaging (MRI), X-ray computed tomography (CT) and ultrasonic imaging (US). Cardiac magnetic resonance imaging offers good soft-tissue contrast, involves no ionizing radiation, requires no injected or ingested tracer, and can image in arbitrary planes.

However, quantitative analysis of global and regional cardiac function, including clinical parameters such as ventricular volume, ejection fraction and myocardial mass, still relies on accurate endocardial and epicardial contours of the left ventricle (LV) and right ventricle (RV) in short-axis images. Drawing these contours by hand is a time-consuming and tedious task that is prone to high intra-observer and inter-observer variability. A fast, accurate, reproducible and fully automated cardiac segmentation method is therefore needed to aid the diagnosis of cardiovascular disease.

In magnetic resonance cardiac images, the gray values of myocardial tissue are very close to those of the surrounding tissue, which makes segmentation of the left ventricle challenging. Commonly used segmentation algorithms include level-set, region-growing and threshold-based methods, but their accuracy and robustness remain limited. In recent years, with improvements in hardware and techniques, deep-learning-based image segmentation algorithms have surpassed traditional ones in many fields; examples applied to left-ventricle segmentation include the FCN architecture (Tran P V. A Fully Convolutional Neural Network for Cardiac Segmentation in Short-Axis MRI [J]. 2016.) and the UNet architecture (Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation [J]. 2015.). The general idea behind the FCN is to use a downsampling path to learn relevant features at various spatial scales and an upsampling path to combine them into pixel-level predictions; however, when a cardiac slice is small, as at the apex of the heart or at end-systole, the network does not overcome the difficulty of the segmentation, because fine object information can be lost during the size reduction performed by the max-pooling layers. UNet is one of the segmentation models commonly used in medical imaging; it obtains the segmentation map through multiple deconvolutions and, during upsampling, fuses in the convolutional features of corresponding size from the front end of the network so that more detail is preserved. However, the network pays no extra attention to the pixels on the segmentation boundary, so segmentation accuracy at the apex or at end-systole remains low.

Summary of the Invention

In view of the above, the present invention proposes a method for segmenting the left ventricle of the heart based on a deep fully convolutional neural network. The segmentation model it uses takes the full image as input and the manual segmentation as label; the network can be trained effectively end to end and finally predicts every pixel of the image to realize ventricular segmentation, thereby overcoming the shortcomings of traditional image segmentation algorithms and achieving accurate and robust left-ventricle segmentation.

A method for segmenting the left ventricle of the heart based on a deep fully convolutional neural network comprises the following steps:

(1) Acquire a cardiac magnetic resonance short-axis image of a subject, manually mark the contour line of the left ventricle in the image, and construct a binary segmentation image of the same size as the short-axis image, in which the pixels on the left-ventricle contour and inside it all have value 1 and the pixels outside the contour all have value 0.

(2) Apply the method of step (1) to different subjects to obtain a large number of samples, each comprising a subject's cardiac magnetic resonance short-axis image and its corresponding binary segmentation image; divide all samples proportionally into a training set, a validation set and a test set.

(3) Use the cardiac magnetic resonance short-axis images of the training samples as the input of a fully convolutional neural network and the binary segmentation images as the ground-truth labels of its output, and train the network; when training is complete, the left-ventricle segmentation model is obtained.

(4) Input the cardiac magnetic resonance short-axis images of the test samples into the left-ventricle segmentation model to obtain binary segmentation images of the left-ventricle contour, and compare them with the binary segmentation images of the test samples.

Further, in step (1) the cardiac magnetic resonance short-axis image of the subject is acquired by imaging the subject's heart with a magnetic resonance scanner simultaneously in the coronal, sagittal and axial orientations, with a coverage from the base of the heart and the roots of the great vessels to the apex, from which the short-axis images are selected.

Further, the specific procedure for constructing the binary segmentation image in step (1) is as follows: the manually marked left-ventricle contour line is converted into a label map recognizable by the neural network, i.e. a binary segmentation image of the same size as the cardiac magnetic resonance short-axis image, in which the pixels on the contour line and inside it belong to the target class and have value 1, while the pixels outside the contour line belong to the background class and have value 0. The numbers of samples in the training, validation and test sets are in the ratio 1:1:1.
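The label construction described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `contour_to_mask` and the square toy contour are hypothetical, and the even-odd ray-casting rule is one plausible way to decide which pixels fall inside the marked contour.

```python
import numpy as np

def contour_to_mask(contour, height, width):
    """Rasterize a closed contour (list of (x, y) vertices) into a binary
    label image: 1 inside the contour, 0 outside (even-odd rule).
    Exact handling of pixels exactly on the boundary is simplified here."""
    mask = np.zeros((height, width), dtype=np.uint8)
    n = len(contour)
    for y in range(height):
        for x in range(width):
            inside = False
            for k in range(n):
                x1, y1 = contour[k]
                x2, y2 = contour[(k + 1) % n]
                # Does the horizontal ray from (x, y) cross this edge?
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            mask[y, x] = 1 if inside else 0
    return mask

# Toy example: a square "ventricle" contour on a 16x16 grid.
square = [(4, 4), (11, 4), (11, 11), (4, 11)]
mask = contour_to_mask(square, 16, 16)
```

In practice the 128×128 masks of the patent would be produced the same way from the hand-drawn contour vertices, one mask per short-axis slice.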

Further, the fully convolutional neural network in step (3) comprises a residual network ResNet-50, four prediction layers P1~P4 and four deconvolution layers D1~D4. From input to output, the residual network ResNet-50 is a cascade of a convolutional layer C, a pooling layer and four residual stages L1~L4; the input of the convolutional layer C is the input of the whole network. Residual stages L1 and L4 are each a cascade of 3 residual structures, residual stage L2 of 4 residual structures, and residual stage L3 of 6 residual structures. Each residual structure is a cascade of three convolutional layers C1~C3, in which the input of convolutional layer C1 is added to the output of convolutional layer C3 to form the output of the residual structure. The output of residual stage L4 is connected to the input of prediction layer P1, and the output of prediction layer P1 to the input of deconvolution layer D1; the output of residual stage L3 is connected to the input of prediction layer P2, and the output of prediction layer P2 is added to the output of deconvolution layer D1 to form the input of deconvolution layer D2; the output of residual stage L2 is connected to the input of prediction layer P3, and the output of prediction layer P3 is added to the output of deconvolution layer D2 to form the input of deconvolution layer D3; the output of residual stage L1 is connected to the input of prediction layer P4, and the output of prediction layer P4 is added to the output of deconvolution layer D3 to form the input of deconvolution layer D4, whose output is the output of the whole network. Prediction layers P1~P4 each use a 3×3 convolution kernel with stride 1, padding 1 and ReLU activation; deconvolution layers D1~D4 each use a kernel that doubles the spatial dimensions of the input.
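The wiring just described can be sanity-checked by tracing spatial sizes in plain Python. This is a sketch under an assumption the patent does not state: standard ResNet-50 strides (stem conv /2, max pool /2, stages L2~L4 each /2, L1 stride 1). The function names are illustrative.

```python
def feature_sizes(input_size=128):
    """Spatial size after each backbone stage, assuming standard
    ResNet-50 strides (an assumption; the patent lists no strides)."""
    s = input_size // 2          # conv C, stride 2
    s //= 2                      # max pool, stride 2
    l1 = s                       # residual stage L1 (stride 1)
    l2 = l1 // 2                 # residual stage L2
    l3 = l2 // 2                 # residual stage L3
    l4 = l3 // 2                 # residual stage L4
    return l1, l2, l3, l4

def decoder_sizes(l1, l2, l3, l4):
    """Each deconv doubles its input; each prediction layer P (3x3 conv,
    stride 1, pad 1) keeps size, so every fusion needs matching sizes."""
    d1 = 2 * l4
    assert d1 == l3              # D1 output fuses with P2(L3)
    d2 = 2 * d1
    assert d2 == l2              # D2 output fuses with P3(L2)
    d3 = 2 * d2
    assert d3 == l1              # D3 output fuses with P4(L1)
    d4 = 2 * d3                  # D4 output: network head
    return d1, d2, d3, d4

l1, l2, l3, l4 = feature_sizes(128)
d1, d2, d3, d4 = decoder_sizes(l1, l2, l3, l4)
```

Under these assumed strides the trace ends at 64×64 after D4, so matching the stated 128×128 prediction map would imply one more twofold upsampling (or a different stride somewhere); the text does not pin this down.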

Further, the training of the fully convolutional neural network in step (3) proceeds as follows: the cardiac magnetic resonance short-axis images of the training samples are fed into the network one by one, the loss function L between each network output and the corresponding ground-truth label is computed, and the parameters of the network are continuously optimized by back-propagation so as to minimize L; the fully convolutional neural network obtained when training is complete is the left-ventricle segmentation model.

Further, the loss function L is expressed as:

$$L = -\frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \sum_{n=1}^{N} G(i,j,n)\,\log P(i,j,n)$$

where W is the width of the binary segmentation image, H is its height, N is the number of classes with N = 2, P(i,j,n) is the probability that the pixel in row j and column i of the network output image belongs to class n, and G(i,j,n) is the value at row j, column i of the binary segmentation image of the corresponding training sample; class 1 means the pixel belongs to the left-ventricle region and class 2 means the pixel belongs to the background.
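The per-pixel loss can be computed in a few lines of numpy. This is a minimal sketch, assuming one-hot labels over the N = 2 classes; the helper name and the tiny 2×2 example are illustrative, not from the patent.

```python
import numpy as np

def pixelwise_cross_entropy(P, G, eps=1e-12):
    """L = -(1/(W*H)) * sum over i, j, n of G[i,j,n] * log(P[i,j,n]).
    P: (H, W, N) predicted class probabilities per pixel.
    G: (H, W, N) one-hot ground truth (left ventricle vs. background).
    eps guards against log(0)."""
    H, W, N = P.shape
    return float(-np.sum(G * np.log(P + eps)) / (W * H))

# Toy 2x2 image, N = 2 classes.
P = np.array([[[0.9, 0.1], [0.8, 0.2]],
              [[0.3, 0.7], [0.6, 0.4]]])
G = np.array([[[1, 0], [1, 0]],
              [[0, 1], [1, 0]]])
loss = pixelwise_cross_entropy(P, G)
```

Each pixel contributes only the log-probability of its true class, so a confident correct prediction (e.g. 0.9) costs little and an uncertain one (e.g. 0.6) costs more.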

Preferably, in step (3) the left-ventricle segmentation model obtained after training is validated on the validation-set samples, and the model parameters are fine-tuned according to the validation results so as to further improve the segmentation accuracy of the model.

The invention introduces deep learning into left-ventricle segmentation of cardiac magnetic resonance short-axis images. The process consists mainly of two stages, training and prediction: in the training stage, preprocessed 128×128 cardiac magnetic resonance images serve as input and the manually produced annotations serve as the network labels used to compute the error; as the number of training iterations increases, the training-set error and the validation-set error gradually decrease. In the test stage, the test-set data are input into the trained model, and the network finally outputs a prediction for every pixel to generate the segmentation result. The invention performs segmentation of cardiac magnetic resonance short-axis images from a data-driven perspective, removes the time-consuming and labor-intensive need to draw contours by hand, overcomes the shortcomings of traditional image segmentation algorithms, and achieves accurate and robust left-ventricle segmentation.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the structure of the deep fully convolutional neural network of the invention.

Fig. 2 shows a cardiac magnetic resonance short-axis cine image sequence.

Fig. 3 shows the sequence of label images of the left-ventricle contour.

Fig. 4 compares the segmentation results of the invention with those of the existing segmentation models FCN and UNet on the test set.

Detailed Description

In order to describe the present invention more clearly, the technical solution of the invention is described in detail below with reference to the accompanying drawings and specific embodiments.

The method of the invention for segmenting the left ventricle of the heart based on a deep fully convolutional neural network is implemented in the following steps.

S1. Acquire complete cardiac magnetic resonance images of the subjects, together with the corresponding manually drawn left-ventricle endocardial contour lines. The complete short-axis acquisitions contain only ordinary cine sequences, as shown in Fig. 2: the scanner images each subject simultaneously in the coronal, sagittal and axial orientations, covering the heart from its base and the roots of the great vessels to the apex, and only the short-axis cardiac images are selected. This embodiment uses 45 subjects in total.

S2. Data preprocessing.

A 128×128 region is cropped from the middle of each image; since the left ventricle generally lies in the central region of a short-axis image, the crop reduces the influence of other tissues on the network. The cropped image is then normalized. The contour line is converted into a label map recognizable by the neural network, finally producing a 128×128 binary image in which the pixels inside the contour belong to the target class and the pixels outside it belong to the background class, processed into label maps as shown in Fig. 3.
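The crop-and-normalize step can be sketched as below. The function name is illustrative, and min-max scaling to [0, 1] is an assumption: the text says only that the cropped image is normalized, without specifying the scheme.

```python
import numpy as np

def preprocess(image, size=128):
    """Center-crop to size x size (the left ventricle usually sits near the
    middle of a short-axis slice), then min-max normalize to [0, 1]."""
    h, w = image.shape
    top = (h - size) // 2
    left = (w - size) // 2
    crop = image[top:top + size, left:left + size].astype(np.float64)
    lo, hi = crop.min(), crop.max()
    return (crop - lo) / (hi - lo) if hi > lo else np.zeros_like(crop)

# Toy example: a 256x216 "scan" with a bright patch near the center.
img = np.zeros((256, 216))
img[120:136, 100:116] = 1000.0
x = preprocess(img)
```

The same transform must of course be applied to the contour coordinates (or the mask rasterized after cropping) so image and label stay aligned.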

S3. Partition the dataset.

The short-axis magnetic resonance image data and the corresponding segmentation label data form the sample dataset, which is divided into a training set, a validation set and a test set in a ratio of roughly 1:1:1.
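The 1:1:1 split can be done with a shuffled three-way partition. A minimal sketch; the function name and the fixed seed are illustrative choices, not from the patent.

```python
import random

def split_dataset(samples, seed=0):
    """Shuffle and split into roughly equal train/val/test thirds
    (the 1:1:1 ratio described in the text)."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    a, b = n // 3, 2 * n // 3
    return samples[:a], samples[a:b], samples[b:]

# 45 subjects, as in this embodiment.
train, val, test = split_dataset(range(45))
```

In practice the split would be done per subject rather than per slice, so that slices from one subject never appear in both training and test sets.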

S4. Build the network model.

The network is built as shown in Fig. 1. A fully convolutional network is used in this training process; the ideas behind convolutional networks are local connectivity and weight sharing. Unlike earlier perceptron networks, in which every neuron is fully connected to all neurons of the previous layer, local connections and shared weights greatly reduce the number of parameters. Each convolutional layer is computed with a kernel of a specific size: given an input image $I \in \mathbb{R}^{h_i \times w_i}$ and a kernel $K \in \mathbb{R}^{h_k \times w_k}$ (where $h_i \ge h_k$, $w_i \ge w_k$), the convolution $K * I$ slides the kernel K over the matrix I and, at every position, takes the elementwise product and sums it, finally yielding a feature matrix $F \in \mathbb{R}^{(h_i - h_k + 1) \times (w_i - w_k + 1)}$. In addition, a pooling layer sometimes follows a convolutional layer; it compresses the result and reduces the spatial size of the data, so that less computation is required. Common pooling operations are max pooling, which takes the maximum over a local region, and average pooling, which takes the mean over a local region.
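The two operations just described, valid convolution and non-overlapping max pooling, can be written directly in numpy. A didactic sketch, not an efficient implementation; like most deep learning frameworks, the "convolution" here is actually cross-correlation (the kernel is not flipped).

```python
import numpy as np

def conv2d_valid(I, K):
    """'Valid' 2-D cross-correlation: slide K over I, multiply
    elementwise, sum. Output is (hi-hk+1) x (wi-wk+1)."""
    hi, wi = I.shape
    hk, wk = K.shape
    out = np.zeros((hi - hk + 1, wi - wk + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(I[r:r + hk, c:c + wk] * K)
    return out

def max_pool(I, size=2):
    """Non-overlapping max pooling: keep the maximum of each block."""
    h, w = I.shape
    return I[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

I = np.arange(16.0).reshape(4, 4)
K = np.ones((2, 2))
F = conv2d_valid(I, K)   # shape (3, 3)
P = max_pool(I)          # shape (2, 2)
```

Pooling is where the fine-detail loss discussed in the Background arises: each 2×2 block keeps only one value, which is why the decoder must fuse in earlier, higher-resolution features.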

In order to deepen the network and learn image features better, a residual network is used as the backbone; residual networks rely on the idea of cross-layer (skip) connections. Suppose the input of a segment of the network is x and the desired output is H(x). Learning H(x) directly from x is relatively hard to train; the residual network instead reframes the target as learning relative to an identity mapping, passing the input x through to the output as an initial result, so the output becomes H(x) = F(x) + x. The segment no longer has to learn a complete output, only the difference between the target value H(x) and x, that is, F(x) = H(x) − x. This skip structure breaks the traditional convention that the output of layer n−1 can only be fed into layer n as input, allowing the output of one layer to skip several layers and serve directly as the input of a later one.
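The identity-shortcut idea fits in three lines. A toy sketch: `residual_block` and the lambda branches are illustrative stand-ins for the three-convolution residual structures C1~C3 of the patent.

```python
import numpy as np

def residual_block(x, F):
    """Identity shortcut: the block learns only the residual F(x)
    and outputs H(x) = F(x) + x."""
    return F(x) + x

x = np.array([1.0, 2.0, 3.0])

# If the residual branch learns to output zero, the block is an
# identity map, which is what makes very deep stacks trainable.
assert np.allclose(residual_block(x, lambda v: np.zeros_like(v)), x)

y = residual_block(x, lambda v: 0.5 * v)   # H(x) = 1.5 * x
```

Gradients also flow through the `+ x` path untouched, which is the practical reason 50-layer backbones such as ResNet-50 train stably.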

A segmentation network classifies an image at the pixel level, unlike classic convolutional networks that use fully connected layers at the end to obtain a fixed-length feature vector for classification. The segmentation network must upsample the feature map of the last stage by deconvolution until it recovers the size of the original input image, thereby producing a prediction for every pixel of the image. Since using only the last feature layer for deconvolution would lose a great deal of information, the invention adds a skip structure: earlier feature layers are also deconvolved and fused with later ones to form the final prediction. In total, the network of the invention uses 4 feature layers.

The network of the invention takes a magnetic resonance short-axis cardiac image as input and ultimately classifies every pixel. After a short-axis image passes through the network, the output is a prediction map of the same size as the input image, where each pixel carries class scores y_0, y_1, ..., y_n, ..., y_N, n ∈ [0, N]; a softmax layer then converts these scores into probabilities according to formula (1):

$$P_n = \frac{e^{y_n}}{\sum_{k=0}^{N} e^{y_k}} \qquad (1)$$
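The per-pixel softmax of formula (1) is straightforward in numpy; subtracting the per-pixel maximum first is a standard numerical-stability trick that does not change the result. The function name is illustrative.

```python
import numpy as np

def pixel_softmax(scores):
    """Convert per-pixel class scores of shape (H, W, N) into
    probabilities with a numerically stable softmax over the class
    axis, as in formula (1)."""
    z = scores - scores.max(axis=-1, keepdims=True)  # stability shift
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

scores = np.array([[[2.0, 0.0]]])   # one pixel, two classes
probs = pixel_softmax(scores)
```

Each pixel's probabilities sum to 1, so the argmax over the class axis directly yields the predicted segmentation map.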

During training, the cross-entropy loss of formula (2) is generally used to measure the gap between the predicted probability map and the label and thereby supervise the optimization of the network parameters. In formula (2), G(i,j,n) is the ground-truth label; the smaller the cross-entropy, the closer the two probability distributions are.

$$L = -\frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \sum_{n=1}^{N} G(i,j,n)\,\log P(i,j,n) \qquad (2)$$

To segment the left ventricle better, the invention proposes the focal cross-entropy loss of formula (3), which pays more attention to pixels that are hard to classify and less attention to pixels that are easy to classify:

$$L = -\frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \sum_{n=1}^{N} G(i,j,n)\,\bigl(1 - P(i,j,n)\bigr)^{\gamma} \log P(i,j,n) \qquad (3)$$

where γ ≥ 0 is the focusing parameter.
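The effect of the modulating factor in formula (3) can be seen numerically. A sketch assuming the standard focal form with γ = 2 as the default (the patent does not fix a γ value); the helper name is illustrative.

```python
def focal_weight(p, gamma=2.0):
    """Modulating factor (1 - p)^gamma applied to the true class's
    log-probability: near-certain pixels (p -> 1) are strongly
    down-weighted, hard pixels (p small) keep nearly full weight."""
    return (1.0 - p) ** gamma

easy = focal_weight(0.95)   # confidently correct pixel
hard = focal_weight(0.30)   # hard pixel, e.g. on the LV boundary
```

With gamma = 0 the factor is identically 1 and formula (3) reduces to the plain cross-entropy of formula (2); larger gamma shifts the loss toward boundary and apex pixels, which is exactly where FCN and UNet were noted to struggle.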

S5. Train the network.

With the short-axis cardiac images of the training set as the input of the neural network and the corresponding label images as the ground-truth labels, the fully convolutional neural network is trained end to end.

S6. Model testing.

Once learning has fixed the final network, the test set is fed into it, and the network outputs a classification result for every pixel. The predictions are compared with the ground truth, and the corresponding Dice metric (DM) is computed according to formula (4), where A denotes the predicted region and B the ground-truth region:

$$DM = \frac{2\,|A \cap B|}{|A| + |B|} \qquad (4)$$
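Formula (4) on binary masks is a one-liner in numpy. The function name and the tiny masks are illustrative; the empty-mask convention (return 1 when both masks are empty) is a common choice, not specified by the patent.

```python
import numpy as np

def dice_metric(A, B):
    """DM = 2|A intersect B| / (|A| + |B|) for two binary masks."""
    A = A.astype(bool)
    B = B.astype(bool)
    denom = A.sum() + B.sum()
    return 2.0 * np.logical_and(A, B).sum() / denom if denom else 1.0

pred = np.array([[1, 1, 0],
                 [1, 0, 0]])
truth = np.array([[1, 1, 0],
                  [0, 1, 0]])
dm = dice_metric(pred, truth)   # 2*2 / (3+3) = 2/3
```

DM ranges from 0 (no overlap) to 1 (perfect overlap), and because it normalizes by region size it is far less dominated by the background than plain pixel accuracy.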

In addition, the invention was compared with the two other segmentation models FCN and UNet; as shown in Fig. 4, the model of the invention achieves the best results on the test set.

The above description of the embodiments is intended to enable those of ordinary skill in the art to understand and apply the present invention. Those familiar with the art can clearly make various modifications to the above embodiments without inventive effort and apply the general principles described here to other embodiments. The invention is therefore not limited to the above embodiments, and improvements and modifications made by those skilled in the art in light of this disclosure all fall within the scope of protection of the invention.

Claims (7)

1. A method for segmenting the left ventricle of the heart based on a deep fully convolutional neural network, comprising the following steps:

(1) acquiring a cardiac magnetic resonance short-axis image of a subject, manually marking the contour line of the left ventricle in the image, and constructing a binary segmentation image of the same size as the short-axis image, in which the pixels on the left-ventricle contour and inside it all have value 1 and the pixels outside the contour all have value 0;

(2) applying the method of step (1) to different subjects to obtain a large number of samples, each comprising a subject's cardiac magnetic resonance short-axis image and its corresponding binary segmentation image, and dividing all samples proportionally into a training set, a validation set and a test set;

(3) using the cardiac magnetic resonance short-axis images of the training samples as the input of a fully convolutional neural network and the binary segmentation images as the ground-truth labels of its output, and training the network to obtain, when training is complete, the left-ventricle segmentation model;

(4) inputting the cardiac magnetic resonance short-axis images of the test samples into the left-ventricle segmentation model to obtain binary segmentation images of the left-ventricle contour, and comparing them with the binary segmentation images of the test samples.
2. The left-ventricle segmentation method according to claim 1, wherein acquiring the cardiac MR short-axis image of the subject in step (1) comprises: performing localizer imaging of the subject's heart in the coronal, sagittal, and axial directions simultaneously with an MR scanner, the imaging range extending from the base of the heart and the roots of the great vessels to the apex, and then selecting the cardiac MR short-axis images from the result. 3. The left-ventricle segmentation method according to claim 1, wherein the binary segmentation image in step (1) is constructed as follows: the manually marked left-ventricle contour is represented as a label map recognizable by the neural network, i.e. a binary segmentation image of the same size as the cardiac MR short-axis image, in which the pixels on the contour and inside it belong to the target class and all have value 1, and the pixels outside the contour belong to the background class and all have value 0; the numbers of samples in the training, validation, and test sets are in the ratio 1:1:1.
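The 1:1:1 split fixed in claim 3 might be implemented as below. This is a sketch: the shuffling, the fixed seed, and the function name are assumptions — the patent specifies only the ratio:

```python
import random

def split_samples(samples, seed=0):
    """Shuffle and split samples into train/validation/test sets at the
    1:1:1 ratio of claim 3. Shuffling and seeding are implementation
    choices, not part of the claim."""
    rng = random.Random(seed)
    s = list(samples)
    rng.shuffle(s)
    k = len(s) // 3
    return s[:k], s[k:2 * k], s[2 * k:]
```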
4. The left-ventricle segmentation method according to claim 1, wherein the fully convolutional neural network in step (3) comprises a residual network ResNet-50, four prediction layers P1–P4, and four deconvolution layers D1–D4. From input to output, ResNet-50 consists of a convolutional layer C, a pooling layer, and four residual stages L1–L4 in cascade, the input of layer C being the input of the whole network. Residual stages L1 and L4 each consist of 3 cascaded residual blocks, stage L2 of 4 cascaded residual blocks, and stage L3 of 6 cascaded residual blocks; each residual block is a cascade of three convolutional layers C1–C3, the input of layer C1 being added to the output of layer C3 to form the block's output. The output of stage L4 is connected to the input of prediction layer P1, and the output of P1 is connected to the input of deconvolution layer D1; the output of stage L3 is connected to the input of prediction layer P2, and the output of P2 is added to the output of D1 to form the input of deconvolution layer D2; the output of stage L2 is connected to the input of prediction layer P3, and the output of P3 is added to the output of D2 to form the input of deconvolution layer D3; the output of stage L1 is connected to the input of prediction layer P4, and the output of P4 is added to the output of D3 to form the input of deconvolution layer D4; the output of D4 is the output of the whole network. Prediction layers P1–P4 each use a 3×3 convolution kernel with stride 1, padding 1, and ReLU activation; the kernels of deconvolution layers D1–D4 are configured so that each layer doubles the spatial dimensions of its input. 5. The left-ventricle segmentation method according to claim 1, wherein the fully convolutional neural network in step (3) is trained as follows: the cardiac MR short-axis images of the training-set samples are input to the network one by one, the loss function L between each network output and the corresponding ground-truth label is computed, and the network parameters are iteratively optimized by back-propagation so as to minimize L; the fully convolutional network established when training is complete is the left-ventricle segmentation model. 6. The left-ventricle segmentation method according to claim 5, wherein the loss function L is expressed as:
$$L = -\frac{1}{W \times H} \sum_{j=1}^{H} \sum_{i=1}^{W} \sum_{n=1}^{N} G(i,j,n)\,\log P(i,j,n)$$
where W is the width and H the height of the binary segmentation image, N is the number of classes with N = 2, P(i,j,n) is the probability that the pixel in row j, column i of the network's output image belongs to class n, and G(i,j,n) is the value of the pixel in row j, column i of the binary segmentation image of the corresponding training-set sample; class 1 indicates that a pixel belongs to the left-ventricle region, and class 2 that it belongs to the background region.
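Reading claim 6 as the standard per-pixel cross-entropy, the loss can be computed as below. The epsilon guard against log(0) and the exact array layout are implementation assumptions, not part of the claim:

```python
import numpy as np

def loss_L(P, G):
    """Cross-entropy loss in the claim-6 notation.
    P[j, i, n] -- predicted probability that the pixel at row j, column i
                  belongs to class n (softmax output, shape (H, W, N)).
    G[j, i, n] -- one-hot ground truth from the binary label image.
    Returns the negative log-likelihood averaged over the W*H pixels."""
    H, W, N = P.shape
    eps = 1e-12  # numerical guard, not in the claim
    return -np.sum(G * np.log(P + eps)) / (W * H)
```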
7. The left-ventricle segmentation method according to claim 1, wherein in step (3) the left-ventricle segmentation model obtained after training is validated on the validation-set samples, and the model parameters are fine-tuned on the basis of this validation to further improve the segmentation accuracy of the model.
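The decoder wiring of claim 4 can be sanity-checked with a pure-Python trace of feature-map sizes. The stage strides 4/8/16/32 are the usual ResNet-50 values and are an assumption here; note that under the claim's rule that each deconvolution doubles its input, the output of D4 sits at half the input resolution:

```python
def decoder_size_trace(h, w):
    """Trace spatial sizes through the claim-4 decoder.
    ResNet-50 stages L1..L4 are assumed to run at strides 4, 8, 16, 32;
    prediction layers P1..P4 (3x3 conv, stride 1, padding 1) preserve
    size; each deconvolution D1..D4 doubles height and width. Each
    upsampled map must match the skip (P2..P4 output) added to it."""
    stage = {'L1': (h // 4, w // 4), 'L2': (h // 8, w // 8),
             'L3': (h // 16, w // 16), 'L4': (h // 32, w // 32)}
    x = stage['L4']                        # P1 output (size-preserving)
    sizes = []
    for skip in ('L3', 'L2', 'L1', None):  # skips added before D2, D3, D4
        x = (2 * x[0], 2 * x[1])           # deconvolution doubles size
        if skip is not None:
            assert x == stage[skip], 'skip/upsample size mismatch'
        sizes.append(x)
    return sizes                           # outputs of D1, D2, D3, D4
```

For a 256×256 input this yields D1–D4 outputs of 16, 32, 64, and 128 pixels per side, confirming that every skip addition in the claim joins maps of equal size.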
CN201910012180.8A 2019-01-07 2019-01-07 Heart left ventricle segmentation method based on deep full convolution neural network Active CN109584254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910012180.8A CN109584254B (en) 2019-01-07 2019-01-07 Heart left ventricle segmentation method based on deep full convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910012180.8A CN109584254B (en) 2019-01-07 2019-01-07 Heart left ventricle segmentation method based on deep full convolution neural network

Publications (2)

Publication Number Publication Date
CN109584254A CN109584254A (en) 2019-04-05
CN109584254B (en) 2022-12-20

Family

ID=65915788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910012180.8A Active CN109584254B (en) 2019-01-07 2019-01-07 Heart left ventricle segmentation method based on deep full convolution neural network

Country Status (1)

Country Link
CN (1) CN109584254B (en)

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136804B (en) * 2019-04-25 2021-11-16 深圳向往之医疗科技有限公司 Myocardial mass calculation method and system and electronic equipment
CN110120051A (en) * 2019-05-10 2019-08-13 上海理工大学 A kind of right ventricle automatic division method based on deep learning
CN110136135B (en) * 2019-05-17 2021-07-06 深圳大学 Segmentation method, apparatus, device and storage medium
CN110197492A (en) * 2019-05-23 2019-09-03 山东师范大学 A kind of cardiac MRI left ventricle dividing method and system
CN110163876B (en) * 2019-05-24 2021-08-17 山东师范大学 Left ventricular segmentation method, system, device and medium based on multi-feature fusion
CN110246149A (en) * 2019-05-28 2019-09-17 西安交通大学 Indoor scene based on depth weighted full convolutional network migrates dividing method
CN110599499B (en) * 2019-08-22 2022-04-19 四川大学 MRI image heart structure segmentation method based on multipath convolutional neural network
CN110731777B (en) * 2019-09-16 2023-07-25 平安科技(深圳)有限公司 Left ventricle measurement method and device based on image recognition and computer equipment
CN112652049B (en) * 2019-10-10 2022-09-27 上海联影医疗科技股份有限公司 Heart scanning method, device, computer equipment and storage medium
CN111199674B (en) * 2020-01-21 2022-07-08 珠海赛纳三维科技有限公司 Heart model, and three-dimensional printing method and system of heart model
CN112365504B (en) * 2019-10-29 2024-11-29 杭州脉流科技有限公司 CT left ventricle segmentation method, device, equipment and storage medium
CN110853012B (en) * 2019-11-11 2022-09-06 苏州锐一仪器科技有限公司 Method, apparatus and computer storage medium for obtaining cardiac parameters
CN112926354A (en) * 2019-12-05 2021-06-08 北京超星未来科技有限公司 Deep learning-based lane line detection method and device
CN111144486B (en) * 2019-12-27 2022-06-10 电子科技大学 Keypoint detection method of cardiac MRI image based on convolutional neural network
CN111242928A (en) * 2020-01-14 2020-06-05 中国人民解放军陆军军医大学第二附属医院 Automatic segmentation, tracking and localization method of atrium based on multi-view learning
CN111466894B (en) * 2020-04-07 2023-03-31 上海深至信息科技有限公司 Ejection fraction calculation method and system based on deep learning
CN111489364B (en) * 2020-04-08 2022-05-03 重庆邮电大学 Medical image segmentation method based on lightweight full convolution neural network
CN111583207B (en) * 2020-04-28 2022-04-12 宁波智能装备研究院有限公司 Method and system for determining heart contour of zebra fish juvenile fish
CN111739000B (en) * 2020-06-16 2022-09-13 山东大学 A system and device for improving the accuracy of left ventricular segmentation in multiple cardiac views
CN111784696B (en) * 2020-06-28 2023-08-01 深圳大学 A method and system for training right ventricle segmentation model and right ventricle segmentation
CN111754534B (en) * 2020-07-01 2024-05-31 杭州脉流科技有限公司 CT left ventricle short axis image segmentation method, device, computer equipment and storage medium based on deep neural network
CN111862190B (en) * 2020-07-10 2024-04-05 北京农业生物技术研究中心 Method and device for automatically measuring area of soft rot disease spots of isolated plants
CN114073536B (en) * 2020-08-12 2025-05-02 通用电气精准医疗有限责任公司 Perfusion imaging system and method
CN112075956B (en) * 2020-09-02 2022-07-22 深圳大学 A deep learning-based ejection fraction estimation method, terminal and storage medium
CN112634193A (en) * 2020-09-30 2021-04-09 上海交通大学 Image anomaly detection method and storage medium
CN112330687B (en) * 2020-10-19 2022-10-28 肾泰网健康科技(南京)有限公司 Kidney pathological image segmentation model, method and system based on AI technology
WO2022087853A1 (en) * 2020-10-27 2022-05-05 深圳市深光粟科技有限公司 Image segmentation method and apparatus, and computer-readable storage medium
CN112766377B (en) * 2021-01-20 2021-10-08 中国人民解放军总医院 Left ventricular magnetic resonance imaging intelligent classification method, device, equipment and medium
CN112508949B (en) * 2021-02-01 2021-05-11 之江实验室 Method for automatically segmenting left ventricle of SPECT three-dimensional reconstruction image
CN113469948B (en) * 2021-06-08 2022-02-25 北京安德医智科技有限公司 Left ventricular segment identification method and device, electronic device and storage medium
CN113808143B (en) * 2021-09-06 2024-05-17 沈阳东软智能医疗科技研究院有限公司 Image segmentation method and device, readable storage medium and electronic equipment
CN113838068B (en) * 2021-09-27 2024-12-03 深圳科亚医疗科技有限公司 Automatic segmentation method, device and storage medium for myocardial segments
CN113744287B (en) * 2021-10-13 2022-08-23 推想医疗科技股份有限公司 Image processing method and device, electronic equipment and storage medium
CN114419157B (en) * 2022-01-23 2024-11-19 东南大学 A method for automatic positioning of the four-chamber heart based on deep learning
CN115049608A (en) * 2022-06-10 2022-09-13 郑州轻工业大学 Full-automatic epicardial adipose tissue extraction system based on YOLO-V5 and U-Net
WO2024098379A1 (en) * 2022-11-11 2024-05-16 深圳先进技术研究院 Fully automatic cardiac magnetic resonance imaging segmentation method based on dilated residual network
CN116385468B (en) * 2023-06-06 2023-09-01 浙江大学 A system based on image analysis software for zebrafish cardiac parameters
CN117058014B (en) * 2023-07-14 2024-03-29 北京透彻未来科技有限公司 LAB color space matching-based dyeing normalization system and method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10521902B2 (en) * 2015-10-14 2019-12-31 The Regents Of The University Of California Automated segmentation of organ chambers using deep learning methods from medical imaging
CN110475505B (en) * 2017-01-27 2022-04-05 阿特瑞斯公司 Automatic segmentation using full convolution network
CN107749061A (en) * 2017-09-11 2018-03-02 天津大学 Based on improved full convolutional neural networks brain tumor image partition method and device

Also Published As

Publication number Publication date
CN109584254A (en) 2019-04-05

Similar Documents

Publication Publication Date Title
CN109584254B (en) Heart left ventricle segmentation method based on deep full convolution neural network
US9968257B1 (en) Volumetric quantification of cardiovascular structures from medical imaging
CN109035252B (en) A kind of super-pixel method towards medical image segmentation
US11024025B2 (en) Automatic quantification of cardiac MRI for hypertrophic cardiomyopathy
CN109598727B (en) CT image lung parenchyma three-dimensional semantic segmentation method based on deep neural network
Li et al. Dilated-inception net: multi-scale feature aggregation for cardiac right ventricle segmentation
CN109242860B (en) Brain tumor image segmentation method based on deep learning and weight space integration
CN109035263A (en) Brain tumor image automatic segmentation method based on convolutional neural networks
CN111192245A (en) A brain tumor segmentation network and segmentation method based on U-Net network
Xu et al. Convolutional-neural-network-based approach for segmentation of apical four-chamber view from fetal echocardiography
CN106296699A (en) Cerebral tumor dividing method based on deep neural network and multi-modal MRI image
CN106096632A (en) Based on degree of depth study and the ventricular function index prediction method of MRI image
CN110706225A (en) Tumor identification system based on artificial intelligence
CN105868572B (en) A kind of construction method of the myocardial ischemia position prediction model based on self-encoding encoder
CN115512110A (en) Medical image tumor segmentation method related to cross-modal attention mechanism
Wang et al. SK-UNet: An improved U-Net model with selective kernel for the segmentation of LGE cardiac MR images
CN109902682A (en) A Breast X-ray Image Detection Method Based on Residual Convolutional Neural Network
CN110363772B (en) Cardiac MRI segmentation method and system based on adversarial network
CN113902738A (en) A cardiac MRI segmentation method and system
Zhuang et al. Tumor classification in automated breast ultrasound (ABUS) based on a modified extracting feature network
CN110458842B (en) Brain tumor segmentation method based on two-channel three-dimensional dense connection network
CN115147600A (en) GBM Multimodal MR Image Segmentation Method Based on Classifier Weight Converter
CN116704305A (en) Multi-modal and multi-section classification method for echocardiography based on deep learning algorithm
Senthilkumaran et al. Brain image segmentation
CN114398979A (en) Ultrasonic image thyroid nodule classification method based on feature decoupling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant