
CN112767407B - CT image kidney tumor segmentation method based on cascade gating 3DUnet model - Google Patents


Info

Publication number
CN112767407B
CN112767407B (application number CN202110141339.3A)
Authority
CN
China
Prior art keywords
tumor
convolutional layer
gated
dataset
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110141339.3A
Other languages
Chinese (zh)
Other versions
CN112767407A (en)
Inventor
孙玉宝
吴敏
徐宏伟
刘青山
辛宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202110141339.3A
Publication of CN112767407A
Application granted
Publication of CN112767407B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30084 Kidney; Renal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a CT image kidney tumor segmentation method based on a cascaded gated 3DUnet model, which comprises the following steps: collecting image sequences containing kidneys from abdominal CT scans, labeling the kidneys and tumors in each image sequence to generate the corresponding annotation masks, and constructing a dataset Dataset; applying the P1 preprocessing operation to the Dataset and constructing a U-shaped deep network model M1 for segmenting the kidney (including the tumor); cropping the image sequences in the Dataset and the corresponding annotation masks to extract the voxel regions containing only the kidney (or tumor), applying the P2 preprocessing operation, and constructing a deep network segmentation model M2 for tumor segmentation based on gated convolutional layers; training models M1 and M2 separately; and segmenting kidney tumor regions with the cascade of models M1 and M2. By combining a two-stage cascaded segmentation model with gated convolutional layers, the invention builds a deep network model for tumor segmentation that remains robust to the shape changes of cancerous kidneys and can effectively segment tumors of different sizes.

Description

CT image kidney tumor segmentation method based on a cascaded gated 3DUnet model

Technical Field

The invention relates to the fields of computers, software, and artificial intelligence, and in particular to a method for segmenting kidney tumors in CT images based on a cascaded gated 3DUnet model.

Background

Medical imaging technology has advanced greatly with the rapid development of science and technology. Integrating modern medicine, physics, electronic information, and computer technology, it has become an indispensable tool in the medical field. Computed tomography (CT) is a widely used medical imaging modality that can accurately and comprehensively describe the detailed characteristics of human organs, tissues, and lesions, allowing physicians to observe diseased regions non-invasively, more directly, and more clearly, so that effective treatment plans can be formulated in time, thereby improving diagnostic efficiency and cure rates. The kidney is a key component of the urinary system, responsible for important functions such as regulating acid-base balance and metabolism. With the accelerating pace of life and increasing work pressure, kidney diseases of all kinds continue to increase. Kidney tumor, i.e. kidney cancer (also known as renal cell carcinoma), is a common kidney disease; it is a malignant tumor originating from the renal epithelium, with complex pathological types and varied clinical manifestations. Compared with other imaging modalities, CT images can better present and distinguish the lesion characteristics of kidney cancer, and have become an important basis for the preliminary diagnosis and follow-up of kidney diseases.

In clinical treatment, accurate segmentation of the kidney and the diseased region is very important for diagnosis, functional evaluation, and treatment decisions. Early segmentation was delineated manually by experienced doctors; this approach is highly subjective and inefficient, its results cannot be reproduced, and it satisfies neither clinical requirements nor current quantitative-diagnosis needs. With the development of modern science and technology, computer-based medical image segmentation has become possible, and researchers have begun to explore automatic segmentation methods. However, accurately and reliably segmenting the kidney in CT images remains difficult: CT images have low contrast, the boundary between the kidney and adjacent organs and tissues is blurred, kidney shapes vary between individuals, and water and air inside the kidney easily introduce noise and holes. For patients with kidney cancer, tumors vary greatly in size and have pixel values similar to normal kidney tissue, so the boundaries are hard to distinguish and segmenting kidney tumors in CT images poses many challenges. Developing a fully automatic segmentation algorithm for kidney tumors in CT images therefore has real research significance.

At present, deep learning has achieved remarkable results in many fields. Its advantage stems from the ability of convolutional neural networks to extract features automatically, which makes deep learning models broadly applicable to different tasks; this advantage has become increasingly apparent as the underlying theory has deepened, making deep learning the mainstream technology of the big-data era. In recent years, deep learning models for medical image segmentation have gradually appeared in the literature. Although convolutional neural networks can automatically extract effective features, for CT kidney tumor segmentation they still suffer from insufficient robustness to kidney shape changes and from inaccurate tumor segmentation. Building an accurate segmentation model for kidney tumors in images can provide a quantitative basis for clinical diagnosis and assist doctors in decision-making, which has important clinical significance and good application prospects.

Object of the Invention

The technical problem to be solved by the present invention is the segmentation of the kidney and its tumors in CT images. A kidney tumor segmentation algorithm based on a cascaded gated three-dimensional fully convolutional network (3DUnet) model is proposed to achieve accurate segmentation of kidney tumor regions in CT image sequences.

Technical Solution

To solve the above technical problem, the present invention provides a kidney tumor segmentation algorithm based on a cascaded gated 3DUnet model. The technical scheme is as follows:

A CT image kidney tumor segmentation method based on a cascaded gated 3DUnet model, comprising the following specific steps:

S101, collect images containing kidneys from abdominal CT scans, take the slice images containing the kidney or tumor to form image sequences, annotate the kidney and tumor in each slice image with annotation software to generate the corresponding annotation masks, and construct the dataset Dataset;

S102, apply P1 preprocessing to the Dataset, split the preprocessed Dataset into an M1 training set and an M1 test set, and train and test the constructed 3DUnet to obtain the deep network model M1 for kidney segmentation;

S103, apply P2 preprocessing to the Dataset, split the preprocessed Dataset into an M2 training set and an M2 test set, and train and test the constructed three-dimensional gated residual fully convolutional network to obtain the deep network segmentation model M2 for tumor segmentation;

S104, apply P1 preprocessing to the image sequence to be segmented, segment the kidney region with M1, and stitch the segmentation results; crop the stitched segmentation result to extract only the voxels of the kidney or tumor, apply P2 preprocessing, and segment the tumor region with M2.

Further, in S101 the ITK-SNAP medical image annotation software is used to annotate the kidney and tumor in the slice images and generate the corresponding annotation masks; the Dataset consists of the slice images and their corresponding annotation masks.

Further, the P1 preprocessing of the Dataset in S102 specifically includes: interpolating all slice images in the Dataset and their corresponding annotation masks to the same voxel spacing; randomly cropping each interpolated slice image and its corresponding annotation mask to obtain voxel patches, and normalizing the voxel patches; the resolution after interpolation is lower than before interpolation.

Further, a 3DUnet is constructed according to the size of the voxel patches and used to obtain the segmentation mask of the kidney region.

Further, the P2 preprocessing of the Dataset in S103 specifically includes: interpolating all slice images in the Dataset and their corresponding annotation masks to the same voxel spacing; applying edge detection to the interpolated annotation masks to obtain the tumor boundary; randomly cropping each interpolated slice image together with its corresponding annotation mask and tumor boundary to obtain voxel patches, and normalizing the voxel patches; the resolution after interpolation is higher than before interpolation.

Further, a three-dimensional gated residual fully convolutional network is constructed according to the size of the voxel patches; it comprises a backbone network and a tumor shape branch network. The backbone network is a 3DUnet in which the encoder and decoder feature maps of the same resolution are connected by skip connections. The tumor shape branch network comprises three cascaded gated convolutional layers together with 3x3x3 convolutional layers, trilinear interpolation, and 1x1x1 convolutional layers. The output of the decoder's first deconvolution layer passes through a 1x1x1 convolutional layer, a 3x3x3 convolutional layer, and trilinear interpolation and serves as an input of the first gated convolutional layer; the output of the decoder's second deconvolution layer passes through a 1x1x1 convolutional layer and serves as an input of the first gated convolutional layer. The output of the first gated convolutional layer passes through a 3x3x3 convolutional layer and trilinear interpolation and serves as an input of the second gated convolutional layer; the output of the decoder's third deconvolution layer passes through a 1x1x1 convolutional layer and serves as an input of the second gated convolutional layer. The output of the second gated convolutional layer passes through a 3x3x3 convolutional layer and trilinear interpolation and serves as an input of the third gated convolutional layer; the output of the decoder's fourth deconvolution layer passes through a 1x1x1 convolutional layer and serves as an input of the third gated convolutional layer. The output of the third gated convolutional layer passes through a 1x1x1 convolutional layer and, together with the decoder output, serves as the input of a fully connected layer; the output of the fully connected layer passes in turn through a 1x1x1 convolutional layer and a Softmax classifier to output the probability map. The output of the third gated convolutional layer also passes through a 1x1x1 convolutional layer and serves as the input of a sigmoid function, which outputs the final predicted mask.

Compared with the prior art, the present invention has the following advantages:

A two-stage segmentation model based on network cascading is proposed, and a random-cropping strategy is adopted, which to a certain extent alleviates the effects of class imbalance and small targets. To address the difficulty of distinguishing tumor boundaries, a tumor shape branch network built on gated convolutional layers is added to a 3DUnet backbone; it can predict the tumor boundary and thereby improves tumor segmentation performance.

Brief Description of the Drawings

Figure 1 is the control flow chart of the method of the present invention;

Figure 2 is a schematic diagram of the prediction operation of the method of the present invention;

Figure 3 is the structure diagram of the M1 model of the present invention;

Figure 4 is the structure diagram of the M2 model of the present invention.

Detailed Description of the Embodiments

In order to make the technical problems to be solved, the technical solutions, and the beneficial effects of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.

In this embodiment, the kidney tumor segmentation algorithm based on the cascaded gated 3DUnet model provided by the present invention is used to segment kidney tumors in CT images, as shown in Figures 1 and 2, and includes the following specific steps:

S101, collect images containing kidneys from abdominal CT scans, take the slice images containing the kidney or tumor to form image sequences, annotate the kidney and tumor in each slice image with annotation software to generate the corresponding annotation masks, and construct the dataset Dataset.

The data in this embodiment come from the KiTS19 challenge dataset. The original data are plain-scan CT sequences; the images and the corresponding manual annotation masks are provided in anonymised NIfTI format with shape (number of slices, height, width). The slice axis corresponds to the axial view, with the slice index increasing from top to bottom; all patients were supine during image acquisition. When a patient has multiple scans, the one with the smallest slice thickness is selected; the slice thickness in the dataset ranges from 1 mm to 5 mm.
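
By way of illustration, the sketch below shows how a KiTS19-style case (NIfTI image plus segmentation) could be loaded and reduced to the slices that contain kidney or tumor, as described in S101. The file names follow the public KiTS19 layout and the label convention (0 background, 1 kidney, 2 tumor) is an assumption; this code is not part of the original patent text.

```python
# Hypothetical loader for one KiTS19-style case directory.
import nibabel as nib
import numpy as np

def load_case(case_dir):
    img = nib.load(f"{case_dir}/imaging.nii.gz")          # CT volume
    seg = nib.load(f"{case_dir}/segmentation.nii.gz")     # annotation mask
    vol = img.get_fdata().astype(np.float32)              # shape: (slices, height, width)
    lab = seg.get_fdata().astype(np.uint8)                # assumed: 0 bg, 1 kidney, 2 tumor
    # keep only the slices that contain kidney or tumor voxels
    keep = np.where(lab.reshape(lab.shape[0], -1).max(axis=1) > 0)[0]
    spacing = img.header.get_zooms()                       # voxel spacing in mm
    return vol[keep], lab[keep], spacing
```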

S102, perform the P1 preprocessing operation on the Dataset dataset, split it into a training set and a test set, and build the deep network model M1 for kidney segmentation based on 3DUnet (as shown in Figure 3).

P1 preprocessing operation: the image sequences of all samples are resampled with trilinear interpolation to a voxel spacing of 3.22×1.62×1.62 mm (the lowest resolution), which serves as the standard spacing, and the manual annotation masks of all samples are resampled to the same voxel spacing with nearest-neighbour interpolation. Voxel patches of size 80×160×160 (the median) are then taken from each sample by random cropping (50% overlap, i.e. a step size of (40,80,80)). The CT values of each sample's image sequence are clipped to [-79, 304] (the 5%-95% range of all voxels) to remove abnormal intensity values caused by certain materials, and, because of the way network weights are initialised, z-score normalisation is applied: the mean is subtracted from each voxel value and the result is divided by the standard deviation, so that the image intensities lie in a range that is easier for a CNN to process. A 3DUnet consisting of an encoder and a decoder is then designed according to the size of the voxel patches. The encoder parameters are listed in Table 1 below; the decoder uses a symmetric structure.
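
A minimal Python sketch of this P1 preprocessing is given below for illustration; the use of scipy.ndimage.zoom for resampling, the sliding-window patch extraction (border padding omitted), and the function name are assumptions not stated in the patent. Table 1 with the encoder settings follows.

```python
# Sketch of P1 preprocessing: resample to 3.22 x 1.62 x 1.62 mm, clip CT values
# to [-79, 304], z-score normalise, and cut 80 x 160 x 160 patches with 50% overlap.
import numpy as np
from scipy.ndimage import zoom

TARGET_SPACING = (3.22, 1.62, 1.62)       # mm, lowest resolution in the dataset
PATCH, STRIDE = (80, 160, 160), (40, 80, 80)

def p1_preprocess(volume, mask, spacing):
    factors = [s / t for s, t in zip(spacing, TARGET_SPACING)]
    vol = zoom(volume, factors, order=1)                  # trilinear for the CT volume
    lab = zoom(mask, factors, order=0)                    # nearest neighbour for the mask
    vol = np.clip(vol, -79, 304)                          # keep the 5%-95% intensity range
    vol = (vol - vol.mean()) / (vol.std() + 1e-8)         # z-score normalisation
    patches = []                                          # overlapping crops, stride (40,80,80)
    for z in range(0, max(vol.shape[0] - PATCH[0], 0) + 1, STRIDE[0]):
        for y in range(0, max(vol.shape[1] - PATCH[1], 0) + 1, STRIDE[1]):
            for x in range(0, max(vol.shape[2] - PATCH[2], 0) + 1, STRIDE[2]):
                sl = np.s_[z:z + PATCH[0], y:y + PATCH[1], x:x + PATCH[2]]
                patches.append((vol[sl], lab[sl]))
    return patches
```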

Table 1. Encoder parameter settings

| Type | Kernel size / stride | Output size |
| --- | --- | --- |
| Convolutional layer (x2) | 3x3x3 / 1 | 80x160x160x30 |
| Max pooling layer | 1x2x2 / (1,2,2) | 80x80x80x30 |
| Convolutional layer (x2) | 3x3x3 / 1 | 80x80x80x60 |
| Max pooling layer | 2x2x2 / 2 | 40x40x40x60 |
| Convolutional layer (x2) | 3x3x3 / 1 | 40x40x40x120 |
| Max pooling layer | 2x2x2 / 2 | 20x20x20x120 |
| Convolutional layer (x2) | 3x3x3 / 1 | 20x20x20x240 |
| Max pooling layer | 2x2x2 / 2 | 10x10x10x240 |
| Convolutional layer (x2) | 3x3x3 / 1 | 10x10x10x320 |
| Max pooling layer | 2x2x2 / 2 | 5x5x5x320 |
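
For illustration, a PyTorch sketch of the encoder in Table 1 follows; the framework, the normalisation layer (InstanceNorm), and the class names are assumptions, while the kernel sizes, pooling strides, and channel progression 30-60-120-240-320 come from the table.

```python
# Sketch of the Table 1 encoder: five stages of two 3x3x3 convolutions each,
# with the first pooling halving only the in-plane axes (kernel 1x2x2).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [30, 60, 120, 240, 320]
        self.blocks = nn.ModuleList([conv_block(c_in, c_out)
                                     for c_in, c_out in zip([1] + chans[:-1], chans)])
        # first stage keeps the 80-slice depth, later stages halve all three axes
        self.pools = nn.ModuleList([nn.MaxPool3d((1, 2, 2))] + [nn.MaxPool3d(2)] * 4)

    def forward(self, x):
        skips = []
        for block, pool in zip(self.blocks, self.pools):
            x = block(x)
            skips.append(x)            # kept for the decoder's skip connections
            x = pool(x)
        return x, skips                # bottleneck (5x5x5x320) plus skip features
```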

S103, crop the image sequences in the Dataset dataset and the corresponding annotation masks, extract the voxel regions containing only the kidney (or tumor), perform the P2 preprocessing operation, split the data into a training set and a test set, and build the deep network segmentation model M2 for tumor segmentation based on the three-dimensional gated residual fully convolutional network.

P2 preprocessing operation: all samples, manual annotation masks, and tumor boundary masks are resampled in the same way to a voxel spacing of 3×0.78×0.78 mm (the median) as the standard; according to the manual annotation, the VOI region containing the kidney (or tumor) is extracted and the values of voxels outside the kidney (or tumor) are reset to 0; voxel patches of size 48×128×128 (the median) are then taken from each VOI region by random cropping (50% overlap, step size (24,64,64)); the CT values of each sample's image sequence are clipped to [-79, 304] (the 5%-95% range of all voxels) to remove abnormal intensity values caused by certain materials; and, because of the way network weights are initialised, z-score normalisation is applied: the mean is subtracted from each voxel value and the result is divided by the standard deviation. A U-shaped fully convolutional neural network is designed that takes the voxel patches as input, and the tumor shape branch network is built from cascaded gated convolutional layers, 1x1x1 convolutional layers, trilinear interpolation, and 1x1x1 convolutional layers.
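
The tumor boundary mask used here is obtained by edge detection on the annotation mask (see S103 above); as an illustration, the sketch below derives it with a morphological gradient, which is one possible realisation and an assumption on our part, as is the tumor label value of 2.

```python
# Hypothetical boundary extraction: tumor surface voxels from a 3D label volume.
import numpy as np
from scipy import ndimage

def tumor_boundary(label_mask: np.ndarray, tumor_value: int = 2) -> np.ndarray:
    """Return a binary boundary mask for the tumor class of a 3D label volume."""
    tumor = (label_mask == tumor_value)                   # tumor voxels only
    eroded = ndimage.binary_erosion(tumor, iterations=1)  # shrink by one voxel
    return (tumor & ~eroded).astype(np.uint8)             # surface voxels = boundary
```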

As shown in Figure 4, the three-dimensional gated residual fully convolutional network consists of a backbone network and a tumor shape branch network. The backbone is a 3DUnet whose encoder and decoder feature maps of the same resolution are connected by skip connections; the decoder output then passes through a 1x1x1 convolution for channel reduction and finally a Softmax classifier that outputs the probability map. The tumor shape branch network consists of three cascaded gated convolutional layers together with 3x3x3 convolutional layers, trilinear interpolation, and 1x1x1 convolutional layers. The output of the decoder's first deconvolution layer passes through a 1x1x1 convolutional layer, a 3x3x3 convolutional layer, and trilinear interpolation and serves as an input of the first gated convolutional layer; the output of the decoder's second deconvolution layer passes through a 1x1x1 convolutional layer and serves as an input of the first gated convolutional layer. The output of the first gated convolutional layer passes through a 3x3x3 convolutional layer and trilinear interpolation and serves as an input of the second gated convolutional layer; the output of the decoder's third deconvolution layer passes through a 1x1x1 convolutional layer and serves as an input of the second gated convolutional layer. The output of the second gated convolutional layer passes through a 3x3x3 convolutional layer and trilinear interpolation and serves as an input of the third gated convolutional layer; the output of the decoder's fourth deconvolution layer passes through a 1x1x1 convolutional layer and serves as an input of the third gated convolutional layer. The output of the third gated convolutional layer passes through a 1x1x1 convolutional layer and, together with the decoder output, serves as the input of a fully connected layer; the output of the fully connected layer passes in turn through a 1x1x1 convolutional layer and a Softmax classifier to output the probability map. The output of the third gated convolutional layer also passes through a 1x1x1 convolutional layer and serves as the input of a sigmoid function, which outputs the final predicted mask. The backbone encoder still uses the settings of Table 1. The 3x3x3 convolutional layers extract boundary features, the trilinear interpolation adjusts the feature-map size, and the 1x1x1 convolutions reduce the number of channels.
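
The patent names a "gated convolutional layer" but does not give its equations. The sketch below follows the gated convolutional layer of Gated-SCNN (Takikawa et al., 2019) adapted to 3D, which is one common formulation; the formula, channel counts, and normalisation choice are therefore assumptions, not the patented design itself. Both inputs are assumed to have already been brought to the same spatial size and channel count by the 1x1x1 convolutions and trilinear interpolation described above.

```python
# Hypothetical 3D gated convolutional layer in the Gated-SCNN style.
import torch
import torch.nn as nn

class GatedConv3d(nn.Module):
    """Gate a shape-branch feature map with a decoder feature map of the same size."""
    def __init__(self, channels):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Conv3d(2 * channels, channels, 1),   # 1x1x1 conv over the concatenated inputs
            nn.InstanceNorm3d(channels),
            nn.Conv3d(channels, 1, 1),
            nn.Sigmoid(),                           # voxel-wise gate in [0, 1]
        )
        self.residual = nn.Conv3d(channels, channels, 1)

    def forward(self, shape_feat, decoder_feat):
        alpha = self.attention(torch.cat([shape_feat, decoder_feat], dim=1))
        gated = shape_feat * (alpha + 1)            # gating with a residual connection
        return self.residual(gated)
```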

For an input CT image, the three-dimensional gated residual fully convolutional network first extracts feature information through the down-sampling and up-sampling parts of the backbone encoder and decoder, and then extracts the shape of the tumor region through the tumor shape branch network. Denoting the boundary map output by the shape branch as s ∈ R^(H×W×C) and the feature map output by the backbone as z ∈ R^(H×W×C), the feature map of the shape branch network is concatenated with the feature map output by the backbone, and the final predicted mask is produced by a 1x1x1 convolution and soft-max. By fusing the semantic features of the backbone with the boundary features of the shape branch network, this patent produces more accurate segmentation results.
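
A small sketch of this fusion step is given below for illustration. The channel counts and the three output classes are assumptions, and the "fully connected layer" mentioned above is modelled here as a 1x1x1 convolution, which acts as a per-voxel fully connected layer; that mapping is our interpretation rather than something the patent states.

```python
# Hypothetical fusion head: concatenate boundary map s with backbone features z,
# reduce with a 1x1x1 convolution, and apply softmax to obtain the probability map.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, backbone_ch=30, shape_ch=1, num_classes=3):
        super().__init__()
        self.fuse = nn.Conv3d(backbone_ch + shape_ch, num_classes, kernel_size=1)

    def forward(self, z, s):
        logits = self.fuse(torch.cat([z, s], dim=1))   # channel-wise concatenation
        return torch.softmax(logits, dim=1)            # final class probability map
```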

S104, select a suitable optimisation learning method, set the relevant hyperparameters, and train the models M1 and M2 separately.

Model M1 is trained on the dataset constructed in S102 and model M2 on the dataset constructed in S103. The Adam optimizer is used for loss optimisation, and at the end of each training run the best average metric on the validation set is taken as the optimal result of the model. The following hyperparameter settings are used: the batch size is set to 2; 300 batches form one epoch and 150 epochs are run in total; the initial learning rate is set to 10^-3 and is automatically reduced by a factor of 0.1 at epochs 80 and 120; the momentum is set to 0.95; and the weight-decay coefficient is kept constant at 10^-4.
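
A sketch of these optimisation settings in PyTorch is given below; the framework is an assumption, and because Adam has no classical momentum parameter, the quoted momentum of 0.95 is mapped to beta1 here, which is our interpretation rather than something the patent states.

```python
# Hypothetical optimiser setup matching the quoted hyperparameters.
import torch

def make_optimizer(model):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                                 betas=(0.95, 0.999), weight_decay=1e-4)
    # learning rate multiplied by 0.1 at epochs 80 and 120
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                     milestones=[80, 120], gamma=0.1)
    return optimizer, scheduler
```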

S105, select a CT image sequence from the test set, perform the preprocessing operations, and segment the kidney tumor region with models M1 and M2.

A plain CT scan of a kidney-cancer patient is randomly selected from the test set and the CT slice images containing the kidney or tumor are taken. After the P1 preprocessing operation, the trained M1 model is used for prediction and the results are stitched back together in the reverse order of the random cropping. Based on the stitched result, voxel patches containing only the kidney or tumor are cropped out; after the P2 preprocessing operation, the trained M2 model is used for prediction, and the segmentation results are again stitched in the reverse order of the random cropping to form the final segmentation result.
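
An end-to-end sketch of this cascaded prediction is shown below; m1_predict and m2_predict are hypothetical wrappers that already include the P1/P2 preprocessing, patch extraction, and stitching described earlier, and all function names are illustrative rather than taken from the patent.

```python
# Hypothetical two-stage inference: M1 locates the kidney, M2 segments the tumor
# inside the kidney VOI, and the tumor mask is written back into the full volume.
import numpy as np

def cascade_predict(volume, spacing, m1_predict, m2_predict):
    kidney_mask = m1_predict(volume, spacing)                  # stage 1: kidney (+ tumor)
    zs, ys, xs = np.where(kidney_mask > 0)
    if zs.size == 0:
        return np.zeros_like(kidney_mask)                      # no kidney found
    voi = np.s_[zs.min():zs.max() + 1, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    voi_volume = np.where(kidney_mask[voi] > 0, volume[voi], 0)  # zero non-kidney voxels
    tumor_voi = m2_predict(voi_volume, spacing)                # stage 2: tumor inside the VOI
    tumor_mask = np.zeros_like(kidney_mask)
    tumor_mask[voi] = tumor_voi
    return tumor_mask
```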

Table 2. Evaluation metric values of different deep network models (%)

[The values of Table 2 are provided as an image in the original document (Figure BDA0002928792860000061) and are not reproduced here.]

Table 2 shows the evaluation metrics of different deep network models, where the result of the two-dimensional fully convolutional network is obtained by stacking the segmentations of the individual slice images, and 3DUnet and V-Net use their original architectures. It can be seen that the patented algorithm achieves higher segmentation accuracy for both the kidney and the tumor than the other comparison algorithms; in particular, for the tumor target, the quantitative metrics of tumor segmentation are significantly improved thanks to the added gated tumor shape branch network.

The above is only a specific embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any transformation or replacement conceivable by a person familiar with the art within the technical scope disclosed by the present invention shall be covered by the present invention; therefore, the scope of protection of the present invention shall be determined by the scope of the claims.

Claims (4)

1. A CT image kidney tumor segmentation method based on a cascaded gated 3DUnet model, characterised by comprising the following specific steps:

S101, collecting images containing kidneys from abdominal CT scans, taking the slice images containing the kidney or tumor to form image sequences, annotating the kidney and tumor in each slice image with annotation software to generate the corresponding annotation masks, and constructing the dataset Dataset;

S102, performing P1 preprocessing on the Dataset, splitting the preprocessed Dataset into an M1 training set and an M1 test set, and training and testing the constructed three-dimensional fully convolutional network 3DUnet to obtain the deep network model M1 for kidney segmentation;

S103, performing P2 preprocessing on the Dataset, splitting the preprocessed Dataset into an M2 training set and an M2 test set, and training and testing the constructed three-dimensional gated residual fully convolutional network to obtain the deep network segmentation model M2 for tumor segmentation;

S104, performing P1 preprocessing on the image sequence to be segmented, segmenting the kidney region with M1, and stitching the segmentation results; cropping the stitched segmentation result to extract only the voxels of the kidney or tumor, performing P2 preprocessing, and segmenting the tumor region with M2;

wherein the P1 preprocessing of the Dataset in S102 specifically comprises: interpolating all slice images in the Dataset and their corresponding annotation masks to the same voxel spacing; randomly cropping each interpolated slice image and its corresponding annotation mask to obtain voxel patches, and normalising the voxel patches; the resolution after interpolation being lower than before interpolation;

wherein the P2 preprocessing of the Dataset in S103 specifically comprises: interpolating all slice images in the Dataset and their corresponding annotation masks to the same voxel spacing; applying edge detection to the interpolated annotation masks to obtain the tumor boundary; randomly cropping each interpolated slice image together with its corresponding annotation mask and tumor boundary to obtain voxel patches, and normalising the voxel patches; the resolution after interpolation being higher than before interpolation;

and wherein the three-dimensional gated residual fully convolutional network comprises a backbone network and a tumor shape branch network, the backbone network being a 3DUnet and the tumor shape branch network comprising three cascaded gated convolutional layers.

2. The CT image kidney tumor segmentation method based on a cascaded gated 3DUnet model according to claim 1, characterised in that in S101 the ITK-SNAP medical image annotation software is used to annotate the kidney and tumor in the slice images and generate the corresponding annotation masks, and the Dataset consists of the slice images and their corresponding annotation masks.

3. The CT image kidney tumor segmentation method based on a cascaded gated 3DUnet model according to claim 1, characterised in that a 3DUnet is constructed according to the size of the voxel patches and used to obtain the segmentation mask of the kidney region.

4. The CT image kidney tumor segmentation method based on a cascaded gated 3DUnet model according to claim 1, characterised in that the three-dimensional gated residual fully convolutional network is constructed according to the size of the voxel patches; the encoder and decoder of the backbone network are connected by skip connections between feature maps of the same resolution; the tumor shape branch network further comprises 3x3x3 convolutional layers, trilinear interpolation, and 1x1x1 convolutional layers; the output of the decoder's first deconvolution layer passes through a 1x1x1 convolutional layer, a 3x3x3 convolutional layer, and trilinear interpolation and serves as an input of the first gated convolutional layer; the output of the decoder's second deconvolution layer passes through a 1x1x1 convolutional layer and serves as an input of the first gated convolutional layer; the output of the first gated convolutional layer passes through a 3x3x3 convolutional layer and trilinear interpolation and serves as an input of the second gated convolutional layer; the output of the decoder's third deconvolution layer passes through a 1x1x1 convolutional layer and serves as an input of the second gated convolutional layer; the output of the second gated convolutional layer passes through a 3x3x3 convolutional layer and trilinear interpolation and serves as an input of the third gated convolutional layer; the output of the decoder's fourth deconvolution layer passes through a 1x1x1 convolutional layer and serves as an input of the third gated convolutional layer; the output of the third gated convolutional layer passes through a 1x1x1 convolutional layer and, together with the decoder output, serves as the input of a fully connected layer; the output of the fully connected layer passes in turn through a 1x1x1 convolutional layer and a Softmax classifier to output the probability map; and the output of the third gated convolutional layer passes through a 1x1x1 convolutional layer and serves as the input of a sigmoid function, which outputs the final predicted mask.
CN202110141339.3A 2021-02-02 2021-02-02 CT image kidney tumor segmentation method based on cascade gating 3DUnet model Active CN112767407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110141339.3A CN112767407B (en) 2021-02-02 2021-02-02 CT image kidney tumor segmentation method based on cascade gating 3DUnet model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110141339.3A CN112767407B (en) 2021-02-02 2021-02-02 CT image kidney tumor segmentation method based on cascade gating 3DUnet model

Publications (2)

Publication Number Publication Date
CN112767407A CN112767407A (en) 2021-05-07
CN112767407B true CN112767407B (en) 2023-07-07

Family

ID=75704619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110141339.3A Active CN112767407B (en) 2021-02-02 2021-02-02 CT image kidney tumor segmentation method based on cascade gating 3DUnet model

Country Status (1)

Country Link
CN (1) CN112767407B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469941B (en) * 2021-05-27 2022-11-08 武汉楚精灵医疗科技有限公司 Method for measuring width of bile-pancreatic duct in ultrasonic bile-pancreatic duct examination
CN113436204A (en) * 2021-06-10 2021-09-24 中国地质大学(武汉) High-resolution remote sensing image weak supervision building extraction method
CN113436173B (en) * 2021-06-30 2023-06-27 陕西大智慧医疗科技股份有限公司 Abdominal multi-organ segmentation modeling and segmentation method and system based on edge perception
CN117237394B (en) * 2023-11-07 2024-02-27 万里云医疗信息科技(北京)有限公司 Multi-attention-based lightweight image segmentation method, device and storage medium
CN117876370B (en) * 2024-03-11 2024-06-07 南京信息工程大学 CT image kidney tumor segmentation system based on three-dimensional axial transducer model


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020097100A1 (en) * 2018-11-05 2020-05-14 HealthMyne, Inc. Systems and methods for semi-automatic tumor segmentation
CN109598728B (en) * 2018-11-30 2019-12-27 腾讯科技(深圳)有限公司 Image segmentation method, image segmentation device, diagnostic system, and storage medium

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035197A (en) * 2018-05-31 2018-12-18 东南大学 CT contrastographic picture tumor of kidney dividing method and system based on Three dimensional convolution neural network
WO2020077202A1 (en) * 2018-10-12 2020-04-16 The Medical College Of Wisconsin, Inc. Medical image segmentation using deep learning models trained with random dropout and/or standardized inputs
CN109829918A (en) * 2019-01-02 2019-05-31 安徽工程大学 A kind of liver image dividing method based on dense feature pyramid network
US10482603B1 (en) * 2019-06-25 2019-11-19 Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN110570432A (en) * 2019-08-23 2019-12-13 北京工业大学 A Liver Tumor Segmentation Method Based on Deep Learning in CT Images
CN110599500A (en) * 2019-09-03 2019-12-20 南京邮电大学 Tumor region segmentation method and system of liver CT image based on cascaded full convolution network
CN110675406A (en) * 2019-09-16 2020-01-10 南京信息工程大学 CT image kidney segmentation algorithm based on residual double-attention depth network
CN110689543A (en) * 2019-09-19 2020-01-14 天津大学 Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
CN111354002A (en) * 2020-02-07 2020-06-30 天津大学 Kidney and kidney tumor segmentation method based on deep neural network
CN111311592A (en) * 2020-03-13 2020-06-19 中南大学 An automatic segmentation method for 3D medical images based on deep learning
CN111563897A (en) * 2020-04-13 2020-08-21 北京理工大学 Breast nuclear magnetic image tumor segmentation method and device based on weak supervised learning
CN111627024A (en) * 2020-05-14 2020-09-04 辽宁工程技术大学 U-net improved kidney tumor segmentation method
CN111627019A (en) * 2020-06-03 2020-09-04 西安理工大学 Liver tumor segmentation method and system based on convolutional neural network
CN111798462A (en) * 2020-06-30 2020-10-20 电子科技大学 Automatic delineation method for nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112085743A (en) * 2020-09-04 2020-12-15 厦门大学 Image segmentation method for renal tumor
CN112258526A (en) * 2020-10-30 2021-01-22 南京信息工程大学 A dual-attention-based approach to CT kidney region cascade segmentation

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Cascaded atrous dual attention U-Net for tumor segmentation; Yu-Cheng Liu et al.; Multimedia Tools and Applications; vol. 80 (2021); 30007-30031 *
Fine 3D brain tumor segmentation based on a cascaded convolutional network; 褚晶辉 et al.; Laser & Optoelectronics Progress; vol. 56, no. 10; 75-84 *
Automatic 3D region segmentation of liver CT images based on a novel deep fully convolutional network; 孙明建 et al.; Chinese Journal of Biomedical Engineering; vol. 37, no. 4; 385-393 *
Automatic segmentation of cystic kidneys in CT images based on a residual dual-attention U-Net model; 徐宏伟 et al.; Application Research of Computers; vol. 37, no. 7; 2237-2240 *
Research progress on deep-learning-based automatic organ segmentation; 郭雯 et al.; Chinese Medical Equipment Journal; vol. 41, no. 1; 85-94 *
Research on deep learning algorithms for kidney segmentation in plain-scan CT images; 徐宏伟; China Master's Theses Full-text Database, Medicine & Health Sciences; no. 02 (2021); E076-27 *
CT segmentation of liver tumors combining deep learning and radiomics; 刘云鹏 et al.; Journal of Image and Graphics; vol. 25, no. 10; 2128-2141 *
Real-time semantic segmentation with gated multi-layer fusion; 张灿龙 et al.; Journal of Computer-Aided Design & Computer Graphics; vol. 32, no. 9; 1442-1449 *

Also Published As

Publication number Publication date
CN112767407A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112767407B (en) CT image kidney tumor segmentation method based on cascade gating 3DUnet model
CN110136157B (en) A deep learning-based method for vessel wall segmentation in 3D carotid ultrasound images
CN109035255B (en) A segmentation method of aorta with dissection in CT images based on convolutional neural network
Ahmad et al. Deep belief network modeling for automatic liver segmentation
Dogan et al. A two-phase approach using mask R-CNN and 3D U-Net for high-accuracy automatic segmentation of pancreas in CT imaging
CN104599270B (en) A kind of Ultrasound Image of Breast Tumor dividing method based on improvement level set algorithm
CN102800089B (en) Main carotid artery blood vessel extraction and thickness measuring method based on neck ultrasound images
CN109389584A (en) Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN
CN110047082A (en) Pancreatic Neuroendocrine Tumors automatic division method and system based on deep learning
Chen et al. Pathological lung segmentation in chest CT images based on improved random walker
Ye et al. Medical image diagnosis of prostate tumor based on PSP-Net+ VGG16 deep learning network
CN114972362A (en) Medical image automatic segmentation method and system based on RMAU-Net network
CN114998265A (en) A Liver Tumor Segmentation Method Based on Improved U-Net
CN110706225A (en) Tumor identification system based on artificial intelligence
Fan et al. Lung nodule detection based on 3D convolutional neural networks
CN116630680B (en) Dual-mode image classification method and system combining X-ray photography and ultrasound
CN114529505A (en) Breast lesion risk assessment system based on deep learning
CN109003280A (en) Inner membrance dividing method in a kind of blood vessel of binary channels intravascular ultrasound image
Honghan et al. Rms-se-unet: A segmentation method for tumors in breast ultrasound images
Mastouri et al. A morphological operation-based approach for Sub-pleural lung nodule detection from CT images
CN114565601A (en) Improved liver CT image segmentation algorithm based on DeepLabV3+
CN113160208A (en) Liver lesion image segmentation method based on cascade hybrid network
CN118351300A (en) Automatic delineation method and system of critical organs based on U-Net model
Astaraki et al. Fully automatic segmentation of gross target volume and organs-at-risk for radiotherapy planning of nasopharyngeal carcinoma

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant