CN106780498A - Automatic segmentation method for epithelial and stromal tissue based on a pixel-wise deep convolutional network - Google Patents
Automatic segmentation method for epithelial and stromal tissue based on a pixel-wise deep convolutional network
- Publication number
- CN106780498A (application CN201611085781.4A)
- Authority
- CN
- China
- Prior art keywords
- layer
- pixel
- theta
- image
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0012 — Image analysis; inspection of images; biomedical image inspection
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/24 — Pattern recognition; classification techniques
- G06T2207/20081 — Special algorithmic details; training; learning
- G06T2207/20084 — Special algorithmic details; artificial neural networks [ANN]
- G06T2207/30068 — Biomedical image processing; mammography; breast
- G06T2207/30088 — Biomedical image processing; skin; dermal
Abstract
Description
Technical Field
The invention relates to the technical field of pathological image information processing, and in particular to a method for the automatic segmentation of epithelial and stromal tissue based on a pixel-wise deep convolutional network.
Background
Epithelial tissue and stroma are the two basic tissue types in breast tissue. About 80% of breast tumors originate in the epithelial tissue of the breast, so a number of researchers are now applying computer-aided diagnosis systems to the heterogeneity analysis of epithelial and stromal tissue in pathological images. Automatic discrimination between epithelium and stroma is a prerequisite for quantifying this heterogeneity, and it is what makes separate analysis of epithelial cell nuclei possible. However, because of the complexity of histopathological images, successfully separating the two tissue types remains a challenging problem.
1) Genuinely big data
A complete whole-slide scan of pathological tissue measures roughly 100,000 × 700,000 pixels and occupies about 1.43 GB of hard disk space. Such high-resolution, large-scale images pose serious challenges for both computer hardware and image analysis algorithms.
2) Pathological tissue structures are complex and vary greatly in morphology
A single pathological slide contains many types of pathological structures with widely differing shapes. Even the same tissue type can exhibit highly varied structure and morphology. It is therefore difficult to describe with a single fixed model, which greatly raises the demands on model robustness.
3) Tissue heterogeneity is high across pathological grades
As the cancer grade rises, the boundaries of normal tissue are progressively eroded by cancer cells, and the boundary information between epithelium and stroma becomes increasingly blurred. Blurred boundaries in turn raise the accuracy requirements on the segmentation model.
4) Other challenges
Tissue images have complex backgrounds and heavy noise, and suffer from staining non-uniformity and imaging-quality problems.
Because H&E-stained (hematoxylin-eosin) pathological images capture the complex morphological characteristics of pathological tissue, they are widely used in clinical practice. In H&E images, however, the background is complex and the noise is high, and there are further problems such as uneven or incorrect staining introduced during slide preparation, as well as differences in scanners and imaging quality. All of these pose great challenges for image processing and analysis algorithms.
Despite these challenges, many researchers have contributed to the automatic segmentation of epithelial and stromal tissue in pathological images and advanced the field.
Unlike traditional methods, deep learning builds more abstract, higher-level features by combining low-level features learned from large amounts of data. As research on deep learning and big-data analysis has deepened, research targets have shifted from simple images to complex, large-scale images, and the complexity of histopathological images fits this trend well.
Summary of the Invention
The technical problem to be solved by the present invention is to overcome the shortcomings of the prior art and provide a method for the automatic segmentation of epithelial and stromal tissue based on a pixel-wise deep convolutional network. Compared with patch-based epithelium/stroma segmentation methods, the classification accuracy is substantially improved in both qualitative and quantitative terms.
To solve the above technical problem, the present invention adopts the following technical solution:
The method for the automatic segmentation of epithelial and stromal tissue based on a pixel-wise deep convolutional network proposed by the present invention comprises the following steps:
Step 1. Preprocess all pathological images to remove differences in color and brightness between images;
Step 2. Randomly select some of the preprocessed pathological images as training samples and use the rest as test samples;
Step 3. Using the manually annotated tissue-region maps, select patches from the interior of the epithelial and stromal tissue in the training samples;
Step 4. Using the manually annotated tissue-region maps, select patches along the edges of the epithelial and stromal tissue in the training samples;
Step 5. Combine the patches obtained in steps 3 and 4 and randomly divide them into a training set and a test set;
Step 6. Construct a deep convolutional neural network model (DCNN) comprising convolutional layers, pooling layers, rectified linear (ReLU) activation functions, local response normalization layers, and a classifier; train the model with the training and test sets from step 5;
Step 7. Take one pathological image from the test samples of step 2 and, centered on each pixel, construct a Q×Q patch, where Q is the input size of the deep convolutional neural network;
Step 8. Feed the patches constructed in step 7 into the deep convolutional neural network model trained in step 6 to obtain the classification results.
As a further refinement of the method for automatic segmentation of epithelial and stromal tissue based on a pixel-wise deep convolutional network according to the present invention, pseudo-color rendering is performed according to the classification results obtained in step 8.
As a further refinement, Q is 32.
As a further refinement, step 4 is specifically as follows: using the manually annotated tissue-region map, find the boundary line between epithelium and stroma in the training samples, dilate the boundary line to obtain the coordinates of points near it, and construct 32×32 patches centered on these points; if the center point falls within epithelial tissue, the patch is treated as an epithelium patch, otherwise as a stroma patch.
As a further refinement, the deep convolutional neural network model (DCNN) in step 6 is constructed as follows:
The deep convolutional neural network is initialized with the weight matrices from the model Alex Krizhevsky used to successfully classify the CIFAR-10 dataset.
The specific structure of the deep convolutional neural network is:
1) Convolutional layer
Suppose the filter bank of layer $l$ is $W^l=\{W^l_1,\dots,W^l_{n_l}\}$. Each input patch of size $w_{l-1}\times w_{l-1}$ is swept by an $m_l\times m_l$ filter over the local receptive fields of the whole image; the filter is convolved with each local receptive field and the result is output. The $n_l$ filters generate $n_l$ feature maps in total, each of size $(w_{l-1}-m_l+1)\times(w_{l-1}-m_l+1)$. This linear filtering is written $z^l_i = W^l_i * x^{l-1}$, where $W^l_i$ is an $m_l\times m_l$ filter of layer $l$, $m_l$ is the filter size in layer $l$, and $n_l$ is the number of filters in the layer-$l$ filter bank $W^l$.
2) The rectified linear (ReLU) activation function is given by $f(x)=\max(0,x)$.
3) Pooling layer
The pooling layer performs a down-sampling pyramid operation on the feature maps of the preceding convolutional layer: within each local receptive field, it extracts the maximum or average value as the feature value for the next layer. After this nonlinear operation, the feature map size becomes $\frac{w_{l-1}-m_l+1}{s}\times\frac{w_{l-1}-m_l+1}{s}$,
where $s$ is the size of the pooling operation.
4) Local response normalization layer
This layer performs local subtractive and divisive normalization.
5) Output layer
The last layer of the whole network is the output layer, which is a classifier. The classifier's input is the last layer of the neural network and its output is the number of classes. In the deep convolutional neural network, the logistic regression model of the two-class Softmax classifier is
$$h_\theta(x)=\frac{1}{\sum_{j=1}^{k} e^{\theta_j^{T} x}}\begin{bmatrix} e^{\theta_1^{T} x}\\ \vdots\\ e^{\theta_k^{T} x}\end{bmatrix},$$
where $x$ is the feature vector of a sample, $T$ denotes transposition, and $\theta$ are the parameters.
The input of the Softmax classifier is the output of the last layer of the DCNN, and the parameters $\theta$ of the Softmax classifier are obtained by minimizing the loss function
$$J(\theta)=-\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{k} 1\{y^{(i)}=j\}\log\frac{e^{\theta_j^{T} x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^{T} x^{(i)}}}\right],$$
where $m$ is the number of samples, $y^{(i)}$ is the label of the $i$-th sample, $x^{(i)}$ is the feature vector of the $i$-th sample, and $k$ is the number of classes.
Here $\theta$ denotes all model parameters:
$$\theta=\begin{bmatrix}\theta_1^{T}\\ \theta_2^{T}\\ \vdots\\ \theta_k^{T}\end{bmatrix},$$
where $\theta_j$ is the parameter used for class $j$ and is also the $j$-th row of $\theta$, with $0<j<k+1$ and $j$ an integer.
Given the learned Softmax parameters $\theta$, each image patch obtained by the sliding window first undergoes a DCNN forward pass to produce a feature vector $x^{(i)}$, which is then fed to the logistic regression model to obtain a probability between 0 and 1. The final class of the image patch is
$$c^{(i)}=\arg\max_{l}\frac{e^{\theta_l^{T} x^{(i)}}}{\sum_{j=1}^{k} e^{\theta_j^{T} x^{(i)}}},$$
where $e$ is the natural base, $k=2$, and $\theta_l$ is the parameter used for class $l$, i.e., the $l$-th row of $\theta$.
Compared with the prior art, the present invention, by adopting the above technical solution, achieves the following technical effects:
(1) Under the same experimental conditions, the detection accuracy of the proposed method is higher than that of patch-based segmentation methods;
(2) The invention classifies each individual pixel, avoiding the problem in patch-based segmentation of pixels of different classes being grouped into one patch;
(3) For tissue at the image border, the method mirrors edge pixels to extend the border so that border pixels can also be classified;
(4) The segmentation result is displayed directly on the original image, so clinicians can view it immediately and base subsequent diagnosis on it.
Brief Description of the Drawings
Figure 1 is a structural diagram of the deep convolutional neural network.
Figure 2 is the overall experimental flowchart of the deep convolutional neural network: (a) the original H&E pathological image; (b) a 32×32 patch extracted from (a) with a sliding window; (c) the patch fed into the deep convolutional neural network (schematic) to obtain the classification result; (d) pseudo-color staining of the center pixel of the patch in (b) according to the classification result in (c); (e) the result once all sliding patches of the whole image have been colored, taken as the segmentation result.
Figure 3 is a schematic diagram of the method for extracting patches at tissue edges: (a) the original H&E pathological image; (b) the result manually annotated by a pathologist (dark gray: epithelium; light gray: stroma; black: don't-care regions); (c) the epithelium/stroma dividing line obtained from the manual annotation, after dilation; (d) points randomly sampled within the dilated dividing-line region, around which patches are constructed; (e) a stroma patch; (f) an epithelium patch.
Figure 4 shows the pseudo-color results of different models for segmenting epithelium and stroma in pathological images: (a) the original pathological image; (b) the precise manual annotation by a pathology expert; (c) the pixel-wise deep convolutional neural network method proposed by the present invention; (d)-(i) the pseudo-color segmentation results of SW-SVM, SW-SMC, Ncut-SVM, Ncut-SMC, SLIC-SVM, and DCNN-SLIC-SMC, respectively.
Figure 5a compares the ROC curves of the proposed method and existing patch-based segmentation methods on the NKI dataset.
Figure 5b compares the ROC curves of the proposed method and existing patch-based segmentation methods on the VGH dataset.
Detailed Description
To make the purpose, technical solution, and advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
Step 1. Pathological image preprocessing: remove color and brightness differences between images.
One pathological image is selected in advance as the target image; after color normalization, all other pathological images will share the target image's color distribution. Specifically, the target image and the image to be normalized are converted from RGB color space to LAB color space, a linear transformation is applied to the gray value of every pixel in each of the three channels, and the linearly transformed image is then converted back to RGB, so that the normalized image has the same color distribution as the target image.
Step 2. Take some pathological images as training samples and the rest as test samples.
Images are selected from the data at random, while ensuring that the training samples are completely separate from the test samples.
Step 3. According to the expert annotation, select image patches from the interior of the epithelium and stroma.
Image patches are selected in which all pixels belong to epithelial tissue or all belong to stromal tissue. Tissue regions are marked on the large slide images entirely by clinicians with professional pathological knowledge, and the program selects square image patches with a side length of 32 pixels from these marked regions. Patches taken from epithelial tissue serve as positive samples, and patches from stromal tissue as negative samples.
Step 4. According to the expert annotation, select image patches at the edges of the epithelium and stroma.
Using the expert annotation, find the boundary between epithelium and stroma in the training samples and dilate the boundary line to obtain the coordinates of points near it. Construct 32×32 patches centered on these points; if the center point falls within epithelial tissue, the patch is treated as an epithelium patch, otherwise as a stroma patch.
Step 5. Combine the patches obtained in steps 3 and 4 and randomly divide them into a training set and a test set.
The data from steps 3 and 4 are combined by random screening, with a ratio of tissue-interior patches to tissue-edge patches of roughly 1:4. A sketch of this integration and split is given below.
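A minimal sketch of the random integration and split, assuming patch and label arrays produced by steps 3 and 4 (function and variable names are hypothetical, not from the patent):

```python
# Sketch of step 5: merge interior and boundary patches (about 1:4) and
# split them randomly into training and test sets.
import numpy as np
from sklearn.model_selection import train_test_split

def build_sets(interior, boundary, interior_labels, boundary_labels, seed=0):
    X = np.concatenate([interior, boundary])            # ~1:4 interior:boundary
    y = np.concatenate([interior_labels, boundary_labels])
    # returns X_train, X_test, y_train, y_test
    return train_test_split(X, y, test_size=0.2, random_state=seed, shuffle=True)
```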
Step 6. Construct a deep convolutional neural network model (DCNN) comprising convolutional layers, ReLU activation functions, pooling layers, local response normalization layers, and a final classifier.
A deep convolutional neural network (DCNN) is a type of artificial neural network. Its weight-sharing structure makes it more similar to a biological neural network, reduces the complexity of the network model, and reduces the number of weights. This advantage is most apparent when the network input is a multi-dimensional image, since the image can be fed directly into the network, avoiding the complex feature extraction and data reconstruction of traditional recognition algorithms. A deep convolutional network is a multi-layer perceptron specifically designed to recognize two-dimensional shapes, and its structure is highly invariant to translation, scaling, tilting, and other forms of deformation.
The performance of a deep convolutional neural network model depends to some extent on the training samples and the initial network weights. Random initialization tends to get stuck in local optima, so here the network is initialized with the weight matrices from the model Alex Krizhevsky used to successfully classify the CIFAR-10 dataset.
The specific structure of the deep convolutional neural network is described below.
1) Convolutional layer
Suppose the filter bank of layer $l$ is $W^l=\{W^l_1,\dots,W^l_{n_l}\}$, where each $W^l_i$ ($1\le i\le n_l$) is an $m_l\times m_l$ filter of layer $l$ and $n_l$ is the number of filters in the layer-$l$ filter bank $W^l$. Each input patch of size $w_{l-1}\times w_{l-1}$ is swept by an $m_l\times m_l$ filter over the local receptive fields of the whole image; the filter is convolved with each local receptive field and the result is output. The $n_l$ filters generate $n_l$ feature maps in total, each of size $(w_{l-1}-m_l+1)\times(w_{l-1}-m_l+1)$. This linear filtering can be written simply as $z^l_i = W^l_i * x^{l-1}$.
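As a quick illustration of the feature-map size rule above, the following toy check (not from the patent; numbers are illustrative only) verifies the $(w_{l-1}-m_l+1)$ formula for a "valid" convolution:

```python
# Toy check of the valid-convolution size rule (w_{l-1} - m_l + 1).
import numpy as np
from scipy.signal import convolve2d

x = np.random.rand(32, 32)                  # input, w_{l-1} = 32
w = np.random.rand(5, 5)                    # one m_l x m_l filter, m_l = 5
z = convolve2d(x, w, mode='valid')          # linear filtering W * x
assert z.shape == (32 - 5 + 1, 32 - 5 + 1)  # 28 x 28 feature map
```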
2) ReLU activation function
To mimic the working principle of neurons in the human brain, and to better fit and represent the data, the feature maps obtained after the linear filtering in each layer are passed through a nonlinear activation function. The ReLU activation function is used here: $f(x)=\max(0,x)$.
Compared with the traditional sigmoid activation function, ReLU is non-saturating and converges faster during gradient-descent training, which speeds up training of the whole network.
3) Pooling layer
The pooling layer performs a down-sampling pyramid operation on the feature maps of the preceding convolutional layer, extracting the maximum (or average) value within each local receptive field as the feature value for the next layer. The pooling layer therefore has no parameters and only performs a nonlinear operation. The rationale is that, in a meaningful image, the information in a local region is redundant, and what matters is extracting the feature that represents and reflects its maximum response. After pooling, the feature map size becomes $\frac{w_{l-1}-m_l+1}{s}\times\frac{w_{l-1}-m_l+1}{s}$,
where $s$ is the size of the pooling operation.
4) Local response normalization layer
This module mainly performs local subtractive and divisive normalizations. It forces adjacent features within a feature map to compete locally, and also forces features at the same spatial position across different feature maps to compete. Subtractive normalization at a given position subtracts from the value at that position the weighted values of the neighboring pixels; the weights distinguish the differing influence of pixels at different distances and can be determined by a Gaussian weighting window. Divisive normalization first computes, for each feature map, the weighted sum over the neighborhood of the same spatial position, then takes the mean of this value across all feature maps; the value of each feature map at that position is recomputed as the value at that point divided by max(that mean, the weighted-sum value of the point's neighborhood in that map). The denominator represents the weighted standard deviation over the same spatial neighborhood of all feature maps. For a single image this amounts to mean-and-variance normalization, i.e., feature normalization. The idea is inspired by computational neuroscience models: local response normalization imitates the lateral inhibition mechanism of the biological nervous system, creating competition among the activities of local neurons so that larger responses become relatively larger, which improves the generalization ability of the model.
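The patent does not write out the normalization formula; a commonly used divisive form (the LRN of Krizhevsky et al., which Caffe implements) is

$$b^{i}_{x,y}=a^{i}_{x,y}\Big/\Big(k+\alpha\sum_{j=\max(0,\,i-n/2)}^{\min(N-1,\,i+n/2)}\big(a^{j}_{x,y}\big)^{2}\Big)^{\beta},$$

where $a^{i}_{x,y}$ is the activity of kernel $i$ at position $(x,y)$, $N$ is the number of kernels in the layer, $n$ is the size of the normalization neighborhood, and $k$, $\alpha$, $\beta$ are hyperparameters.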
5) Output layer
The last layer of the whole network is the output layer, which is a classifier. The classifier's input is the last layer of the neural network and its output is the number of classes. In the deep convolutional neural network, the logistic regression model of the two-class Softmax classifier is
$$h_\theta(x)=\frac{1}{\sum_{j=1}^{k} e^{\theta_j^{T} x}}\begin{bmatrix} e^{\theta_1^{T} x}\\ \vdots\\ e^{\theta_k^{T} x}\end{bmatrix},$$
where the training set consists of $m$ labeled samples $\{(x^{(1)},y^{(1)}),\dots,(x^{(m)},y^{(m)})\}$, $x$ is the feature vector of a sample, $T$ denotes transposition, and $\theta$ are the parameters.
The input of the Softmax classifier is the output of the last layer of the DCNN, and the parameters $\theta$ of the Softmax classifier are obtained by minimizing the loss function
$$J(\theta)=-\frac{1}{m}\left[\sum_{i=1}^{m}\sum_{j=1}^{k} 1\{y^{(i)}=j\}\log\frac{e^{\theta_j^{T} x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^{T} x^{(i)}}}\right],$$
where $m$ is the number of samples, $y^{(i)}$ is the label of the $i$-th sample, and $x^{(i)}$ is its feature vector.
For convenience, the symbol $\theta$ denotes all model parameters:
$$\theta=\begin{bmatrix}\theta_1^{T}\\ \theta_2^{T}\\ \vdots\\ \theta_k^{T}\end{bmatrix},$$
where the subscript of $\theta$ indicates the class, the superscript $T$ denotes transposition, and $k$ is the total number of classes.
Given the learned Softmax parameters $\theta$, each image patch obtained by the sliding window first undergoes a DCNN forward pass to produce a feature vector $x^{(i)}$, which is then fed to the logistic regression model to obtain a probability between 0 and 1. The final class of the image patch is
$$c^{(i)}=\arg\max_{l}\frac{e^{\theta_l^{T} x^{(i)}}}{\sum_{j=1}^{k} e^{\theta_j^{T} x^{(i)}}},$$
where $e$ is the natural base and $k=2$; $\theta_j$ is the parameter used for class $j$ (the $j$-th row of $\theta$) and $\theta_l$ the parameter used for class $l$ (the $l$-th row of $\theta$).
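A minimal sketch of the two-class softmax decision just described, assuming $\theta$ is a $k\times d$ matrix and $x$ the $d$-dimensional feature vector from the DCNN's last layer (names are illustrative, not from the patent):

```python
# Minimal numpy sketch of the softmax prediction rule above; theta (k x d)
# and x (d,) are assumed to come from the trained model (names hypothetical).
import numpy as np

def softmax_predict(theta: np.ndarray, x: np.ndarray):
    z = theta @ x                      # one score theta_j^T x per class
    z = z - z.max()                    # stabilize the exponentials
    p = np.exp(z) / np.exp(z).sum()    # probabilities in [0, 1], summing to 1
    return int(p.argmax()), p          # argmax_l P(y = l | x; theta)
```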
Step 7. Take one pathological image from the test samples of step 2 and, centered on each pixel, construct a 32×32 patch.
Centered on each pixel, 15 pixels are taken on one side and 16 on the other to form a 32×32 patch. For tissue at the image border, edge pixels are mirrored to extend the border, so that border pixels can also be classified.
Step 8. Feed the patches from step 7 into the pre-trained deep convolutional neural network to obtain classification results, and render pseudo-color according to the results.
The patches extracted in step 7 are input into the deep convolutional neural network model trained in step 6 to obtain the final output. If the result is 0, the center pixel of the patch is considered an epithelium pixel and is colored dark gray; if the result is 1, the center pixel is considered a stroma pixel and is colored light gray. The positions of the black regions in the expert annotation are also located, and the same positions in the pseudo-color result are colored black.
To help the public understand the technical solution of the present invention, a specific embodiment is given below.
In this embodiment, the technical solution provided by the present invention is applied to image sets of hematoxylin-and-eosin-stained (H&E) breast cancer tissue. The method was tested on two databases, provided respectively by the Netherlands Cancer Institute (NKI) and Vancouver General Hospital (VGH). They comprise 157 pathological images (NKI: 106; VGH: 51) in which epithelium and stroma were manually labeled by pathology experts. Each image was cropped from an H&E-stained breast cancer tissue microarray (TMA) at 20× optical resolution, with an image size of 1128×720.
In this embodiment, tissue features are extracted by the deep convolutional neural network and classified by a softmax classifier. To verify the effectiveness of the proposed pixel-wise deep-convolutional-network segmentation of epithelium and stroma, it is compared with several common patch-based epithelium/stroma segmentation methods that use deep convolutional neural networks to extract patch features: SW-SVM (sliding window + support vector machine), SW-SMC (sliding window + softmax), Ncut-SVM (normalized graph cut + SVM), Ncut-SMC (normalized graph cut + softmax), SLIC-SVM (simple linear iterative clustering + SVM), and SLIC-SMC (simple linear iterative clustering + softmax).
Step 1. Pathological image preprocessing: remove color and brightness differences between images.
One pathological image is selected in advance as the target image; after color normalization, all other pathological images will share the target image's color distribution. Specifically, the target image and the image to be normalized are converted from RGB color space to LAB color space, a linear transformation is applied to the gray value of every pixel in each of the three channels, and the linearly transformed image is converted back to RGB, so that the normalized image has the same color distribution as the target image.
The linear transformation of pixel gray values is, for each LAB channel,
$$I_{mapped}=\left(I_{original}-\mu_{original}\right)\frac{\sigma_{target}}{\sigma_{original}}+\mu_{target},$$
where $\sigma$ and $\mu$ are defined as the standard deviation and mean of the gray values of all pixels in each LAB channel; target denotes the target image, original the image before normalization, and mapped the normalized image.
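A sketch of this LAB-space linear transfer, assuming OpenCV for the color-space conversions (the patent does not name a library; variable names are hypothetical):

```python
# Sketch of the step-1 color normalization: match each LAB channel's mean
# and standard deviation to the target image, then convert back to RGB.
import cv2
import numpy as np

def normalize_color(original: np.ndarray, target: np.ndarray) -> np.ndarray:
    src = cv2.cvtColor(original, cv2.COLOR_RGB2LAB).astype(np.float32)
    ref = cv2.cvtColor(target, cv2.COLOR_RGB2LAB).astype(np.float32)
    out = np.empty_like(src)
    for c in range(3):                            # per-channel linear transform
        mu_s, sd_s = src[..., c].mean(), src[..., c].std()
        mu_r, sd_r = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - mu_s) * (sd_r / sd_s) + mu_r
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_LAB2RGB)   # back to RGB
```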
Step 2. Take some pathological images as training samples and the rest as test samples.
Images are selected from the data at random, while ensuring that the training samples are completely separate from the test samples.
Step 3. According to the expert annotation, select image patches from the interior of the epithelium and stroma.
Image patches are selected in which all pixels belong to epithelial tissue or all belong to stromal tissue. Tissue regions are marked on the large slide images entirely by clinicians with professional pathological knowledge, and the program selects square image patches with a side length of 32 pixels from these marked regions. Patches taken from epithelial tissue serve as positive samples, and patches from stromal tissue as negative samples.
Step 4. According to the expert annotation, select image patches at the edges of the epithelium and stroma.
As shown in Figure 3, the boundary between epithelium and stroma in the training samples is found from the expert annotation (Figure 3(b)), and a morphological dilation is applied to the boundary line (Figure 3(c)) to obtain the dilated boundary, from which the coordinates of the points belonging to the boundary line are taken. 32×32 patches are constructed centered on these points; if the center point falls within epithelial tissue, the patch is considered an epithelium patch (Figure 3(f)), otherwise a stroma patch (Figure 3(e)). For better visualization, the original image (Figure 3(a)) is fused with the dilated boundary (Figure 3(c)) to give the boundary illustration (Figure 3(d)). A sketch of this sampling procedure is given below.
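A sketch under stated assumptions (the patent does not fix the label coding or the dilation radius; here 1 = epithelium, 2 = stroma, 0 = don't-care and a 5-iteration dilation are assumed):

```python
# Sketch of step 4: dilate the epithelium/stroma boundary, sample patch
# centers inside the dilated band, and label each patch by its center pixel.
import numpy as np
from scipy import ndimage

def boundary_patches(img, mask, q=32, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    epi = (mask == 1)
    boundary = epi ^ ndimage.binary_erosion(epi)             # boundary line
    band = ndimage.binary_dilation(boundary, iterations=5)   # dilated band
    rows, cols = np.nonzero(band & (mask > 0))               # skip don't-care
    pick = rng.choice(len(rows), size=min(n_samples, len(rows)), replace=False)
    half = q // 2
    patches, labels = [], []
    for r, c in zip(rows[pick], cols[pick]):
        if half <= r < img.shape[0] - half and half <= c < img.shape[1] - half:
            patches.append(img[r - half:r + half, c - half:c + half])
            labels.append(0 if mask[r, c] == 1 else 1)       # center decides
    return np.stack(patches), np.array(labels)
```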
Step 5. Combine the patches obtained in steps 3 and 4 and randomly divide them into a training set and a test set.
The data from steps 3 and 4 are combined by random screening, with a ratio of tissue-interior patches to tissue-edge patches of roughly 1:4. The sample counts are shown in Table 1.
Table 1. Number of training samples
Step 6. Construct a deep convolutional neural network model (DCNN) comprising convolutional layers, ReLU activation functions, pooling layers, local response normalization layers, and a final classifier.
For the convolutional neural network, the framework used in the present invention is the popular Caffe framework. The network structure is shown in Figure 1:
The first layer convolves the image with 32 convolution kernels (conv) (kernel size = 5; stride = 1; mirror-padding pad = 2).
The second layer down-samples the convolution result by max pooling (pool) (kernel size = 3; stride = 2; pad = 0).
The ReLU activation function and local response normalization (LRN) are then applied.
The third layer convolves the image with 32 convolution kernels (kernel size = 5; stride = 1; pad = 2).
The ReLU activation function is then applied.
The fourth layer down-samples the convolution result by max pooling (kernel size = 3; stride = 2; pad = 0).
Local response normalization is then applied.
The fifth layer convolves the image with 64 convolution kernels (kernel size = 5; stride = 1; pad = 2).
The ReLU activation function is then applied.
The sixth layer down-samples the convolution result by max pooling (kernel size = 3; stride = 2; pad = 0).
The seventh layer is fully connected (ip) to the previous layer with 64 units.
The eighth layer outputs the classification result and the loss value relative to the ground truth; a sketch of the full stack follows.
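The patent implements this network in Caffe; the following PyTorch rendering is only an assumed equivalent (LRN constants are not given in the text and are left near library defaults; the 64·3·3 flattened size follows from a 32×32 input):

```python
# PyTorch sketch of the described 8-layer network (assumed equivalent).
import torch
import torch.nn as nn

class EpiStromaNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=1, padding=2),   # layer 1
            nn.MaxPool2d(kernel_size=3, stride=2),                  # layer 2
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=3),                           # LRN
            nn.Conv2d(32, 32, kernel_size=5, stride=1, padding=2),  # layer 3
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # layer 4
            nn.LocalResponseNorm(size=3),
            nn.Conv2d(32, 64, kernel_size=5, stride=1, padding=2),  # layer 5
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # layer 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 3 * 3, 64),     # layer 7: fully connected, 64 units
            nn.Linear(64, num_classes),    # layer 8: class scores for the loss
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

net = EpiStromaNet()
logits = net(torch.randn(1, 3, 32, 32))    # one 32x32 RGB patch -> two logits
```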
Step 7. Take one pathological image from the test samples of step 2 and, centered on each pixel, construct a 32×32 patch.
Centered on each pixel, 15 pixels are taken on one side and 16 on the other to form a 32×32 patch. For tissue at the image border, edge pixels are mirrored to extend the border for convenient patch extraction, so that border pixels can also be classified.
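A sketch of this pixel-wise inference with mirrored borders, assuming a trained `net` such as the sketch above and an H×W×3 uint8 image (names are hypothetical):

```python
# Sketch of step 7: pad the image by mirroring (15 / 16 pixels for a 32x32
# patch), slide a window over every pixel, and classify its centered patch.
import numpy as np
import torch

@torch.no_grad()
def classify_pixels(net, img: np.ndarray, q: int = 32) -> np.ndarray:
    pad_lo, pad_hi = (q - 1) // 2, q // 2      # 15 and 16 for q = 32
    padded = np.pad(img, ((pad_lo, pad_hi), (pad_lo, pad_hi), (0, 0)),
                    mode='reflect')            # mirror the edge pixels
    h, w = img.shape[:2]
    labels = np.zeros((h, w), dtype=np.uint8)  # 0 = epithelium, 1 = stroma
    for r in range(h):                         # one batched row at a time
        patches = np.stack([padded[r:r + q, c:c + q] for c in range(w)])
        x = torch.from_numpy(patches).permute(0, 3, 1, 2).float() / 255.0
        labels[r] = net(x).argmax(dim=1).numpy().astype(np.uint8)
    return labels
```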
Step 8. Feed the patches from step 7 into the pre-trained deep convolutional neural network to obtain classification results, and render pseudo-color according to the results.
The patches extracted in step 7 are input into the deep convolutional neural network model trained in step 6 to obtain the final output. If the result is 0, the center pixel of the patch is considered an epithelium pixel and is colored dark gray; if the result is 1, the center pixel is considered a stroma pixel and is colored light gray. The positions of the black regions in the expert annotation are also located, and the same positions in the pseudo-color result are colored black.
Figure 2 shows the overall experimental flow of the deep convolutional neural network: (a) the original H&E pathological image; (b) a 32×32 patch extracted from (a) with a sliding window; (c) the patch fed into the deep convolutional neural network (schematic) to obtain the classification result; (d) pseudo-color staining of the center pixel of the patch in (b) according to the classification result in (c); (e) the result once all sliding patches of the whole image have been colored, taken as the segmentation result.
To verify the effectiveness of the pixel-wise deep-convolutional-network segmentation of epithelium and stroma proposed by the present invention, it was compared with several common patch-based epithelium/stroma segmentation methods that use deep convolutional neural networks to extract patch features: SW-SVM (sliding window + support vector machine), SW-SMC (sliding window + softmax), Ncut-SVM (normalized graph cut + SVM), Ncut-SMC (normalized graph cut + softmax), SLIC-SVM (simple linear iterative clustering + SVM), and SLIC-SMC (simple linear iterative clustering + softmax).
Figure 4 shows the pseudo-color results of the different models for segmenting epithelium and stroma in pathological images. Figure 4(a) is the original pathological image; Figure 4(b) is the precise manual annotation by a pathology expert, in which dark gray represents epithelium, light gray represents stroma, and black is the background region, i.e., the region of no interest; Figure 4(c) is the pixel-wise deep convolutional neural network method proposed here; Figures 4(d)-(i) are the pseudo-color segmentation results of SW-SVM, SW-SMC, Ncut-SVM, Ncut-SMC, SLIC-SVM, and DCNN-SLIC-SMC, respectively, where dark gray marks regions the classifier labels as epithelium, light gray marks regions labeled as stroma, and black is the background region of no interest.
The results show that, setting aside the background region (black in the expert annotation), the proposed algorithm agrees very closely with the expert annotation and has a clear advantage.
To quantify the experimental results, parameters derived from the confusion matrix and ROC curves are used for comparison.
TP denotes true positives: the number of pixels labeled epithelium by the expert and classified as epithelium;
FP denotes false positives: the number of pixels labeled stroma by the expert but classified as epithelium;
FN denotes false negatives: the number of pixels labeled epithelium by the expert but classified as stroma;
TN denotes true negatives: the number of pixels labeled stroma by the expert and classified as stroma.
The formulas for the parameters derived from the confusion matrix are given in Table 2. The true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), negative predictive value (NPV), false positive rate (FPR), false negative rate (FNR), false discovery rate (FDR), accuracy (ACC), F1 score (F1), and Matthews correlation coefficient (MCC) are all evaluation indices derived from the four confusion-matrix counts above; ACC, F1, and MCC assess the model's overall ability.
Table 2. Formulas of the parameters derived from the confusion matrix (standard definitions)
Metric | Formula
---|---
TPR | TP / (TP + FN)
TNR | TN / (TN + FP)
PPV | TP / (TP + FP)
NPV | TN / (TN + FN)
FPR | FP / (FP + TN)
FNR | FN / (FN + TP)
FDR | FP / (FP + TP)
ACC | (TP + TN) / (TP + TN + FP + FN)
F1 | 2TP / (2TP + FP + FN)
MCC | (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN))
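A small sketch computing the Table 2 indices from pixel-wise predictions, treating epithelium as the positive class as the definitions above do (array names are hypothetical):

```python
# Sketch: derive the Table 2 metrics from 0/1 label arrays
# (0 = epithelium = positive class, 1 = stroma).
import numpy as np

def confusion_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    pos, neg = (y_true == 0), (y_true == 1)
    tp = float(np.sum(pos & (y_pred == 0)))
    fn = float(np.sum(pos & (y_pred == 1)))
    fp = float(np.sum(neg & (y_pred == 0)))
    tn = float(np.sum(neg & (y_pred == 1)))
    mcc_den = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {
        'TPR': tp / (tp + fn), 'TNR': tn / (tn + fp),
        'PPV': tp / (tp + fp), 'NPV': tn / (tn + fn),
        'ACC': (tp + tn) / (tp + tn + fp + fn),
        'F1':  2 * tp / (2 * tp + fp + fn),
        'MCC': (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0,
    }
```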
Table 3 below gives the quantitative evaluation (%) of the segmentation results of the different models, with the best value for each index in bold.
Table 3
A graph showing ROC curves is called an "ROC plot". When comparing several learners, if the ROC curve of one learner completely encloses that of another, the former can be asserted to outperform the latter; if the two curves cross, it is hard to say which performs better. A more reasonable criterion is to compare the area under the ROC curve, i.e., the AUC (Area Under ROC Curve). By definition, the AUC is obtained by summing the areas of the parts under the ROC curve; the larger the AUC, the better the performance. Figure 5a shows the ROC curves of the segmentation performance of the several methods on the NKI dataset, and Figure 5b shows those on the VGH dataset. The AUC values show that the pixel-wise deep-convolutional-network segmentation of epithelium and stroma proposed by the present invention outperforms patch-based automatic segmentation.
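The ROC/AUC comparison can be sketched with scikit-learn, assuming `scores` are a model's epithelium probabilities for each test pixel (names hypothetical):

```python
# Sketch: ROC curve and AUC for one method, as plotted in Figures 5a/5b.
import numpy as np
from sklearn.metrics import roc_curve, auc

def roc_auc(y_true: np.ndarray, scores: np.ndarray):
    fpr, tpr, _ = roc_curve(y_true, scores)  # sweep the decision threshold
    return fpr, tpr, auc(fpr, tpr)           # AUC = area under the ROC curve
```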
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611085781.4A CN106780498A (en) | 2016-11-30 | 2016-11-30 | Automatic segmentation method for epithelial and stromal tissue based on a pixel-wise deep convolutional network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106780498A true CN106780498A (en) | 2017-05-31 |
Family
ID=58914891
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611085781.4A Pending CN106780498A (en) | 2016-11-30 | 2016-11-30 | Based on point depth convolutional network epithelium and matrix organization's automatic division method pixel-by-pixel |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106780498A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150213302A1 (en) * | 2014-01-30 | 2015-07-30 | Case Western Reserve University | Automatic Detection Of Mitosis Using Handcrafted And Convolutional Neural Network Features |
CN106022384A (en) * | 2016-05-27 | 2016-10-12 | 中国人民解放军信息工程大学 | Image attention semantic target segmentation method based on fMRI visual function data DeconvNet |
Non-Patent Citations (3)
Title |
---|
HAI SU et al.: "Region segmentation in histopathological breast cancer images using deep convolutional neural network", IEEE International Symposium on Biomedical Imaging *
JUN XU et al.: "A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images", Neurocomputing *
LEI GONG et al.: "Automatic pathological grading of breast cancer tumors based on multi-feature description", Journal of Computer Applications *
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107293289A (en) * | 2017-06-13 | 2017-10-24 | 南京医科大学 | A kind of speech production method that confrontation network is generated based on depth convolution |
CN107293289B (en) * | 2017-06-13 | 2020-05-29 | 南京医科大学 | Speech generation method for generating confrontation network based on deep convolution |
CN108492297A (en) * | 2017-12-25 | 2018-09-04 | 重庆理工大学 | The MRI brain tumors positioning for cascading convolutional network based on depth and dividing method in tumor |
CN108197606A (en) * | 2018-01-31 | 2018-06-22 | 浙江大学 | The recognition methods of abnormal cell in a kind of pathological section based on multiple dimensioned expansion convolution |
CN108447062A (en) * | 2018-02-01 | 2018-08-24 | 浙江大学 | A kind of dividing method of the unconventional cell of pathological section based on multiple dimensioned mixing parted pattern |
CN108447062B (en) * | 2018-02-01 | 2021-04-20 | 浙江大学 | A segmentation method for unconventional cells in pathological slices based on a multi-scale hybrid segmentation model |
CN108510485B (en) * | 2018-03-27 | 2022-04-05 | 福州大学 | A reference-free image quality assessment method based on convolutional neural network |
CN108510485A (en) * | 2018-03-27 | 2018-09-07 | 福州大学 | It is a kind of based on convolutional neural networks without reference image method for evaluating quality |
CN108766555A (en) * | 2018-04-08 | 2018-11-06 | 深圳大学 | The computer diagnosis method and system of Pancreatic Neuroendocrine Tumors grade malignancy |
CN108629768B (en) * | 2018-04-29 | 2022-01-21 | 山东省计算中心(国家超级计算济南中心) | Method for segmenting epithelial tissue in esophageal pathology image |
CN108629768A (en) * | 2018-04-29 | 2018-10-09 | 山东省计算中心(国家超级计算济南中心) | The dividing method of epithelial tissue in a kind of oesophagus pathological image |
CN108647732A (en) * | 2018-05-14 | 2018-10-12 | 北京邮电大学 | A kind of pathological image sorting technique and device based on deep neural network |
CN109003274A (en) * | 2018-07-27 | 2018-12-14 | 广州大学 | A kind of diagnostic method, device and readable storage medium storing program for executing for distinguishing pulmonary tuberculosis and tumour |
CN110796661A (en) * | 2018-08-01 | 2020-02-14 | 华中科技大学 | Fungal microscopic image segmentation detection method and system based on convolutional neural network |
CN110796661B (en) * | 2018-08-01 | 2022-05-31 | 华中科技大学 | Fungal microscopic image segmentation detection method and system based on convolutional neural network |
CN109325495B (en) * | 2018-09-21 | 2022-04-26 | 南京邮电大学 | Crop image segmentation system and method based on deep neural network modeling |
CN109325495A (en) * | 2018-09-21 | 2019-02-12 | 南京邮电大学 | A crop image segmentation system and method based on deep neural network modeling |
CN109781732A (en) * | 2019-03-08 | 2019-05-21 | 江西憶源多媒体科技有限公司 | A kind of small analyte detection and the method for differential counting |
CN110110634A (en) * | 2019-04-28 | 2019-08-09 | 南通大学 | Pathological image polychromatophilia color separation method based on deep learning |
CN110110634B (en) * | 2019-04-28 | 2023-04-07 | 南通大学 | Pathological image multi-staining separation method based on deep learning |
CN110598781A (en) * | 2019-09-05 | 2019-12-20 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN110659692A (en) * | 2019-09-26 | 2020-01-07 | 重庆大学 | Pathological image automatic labeling method based on reinforcement learning and deep neural network |
CN111325103B (en) * | 2020-01-21 | 2020-11-03 | 华南师范大学 | Cell labeling system and method |
CN111325103A (en) * | 2020-01-21 | 2020-06-23 | 华南师范大学 | Cell labeling system and method |
CN111798428A (en) * | 2020-07-03 | 2020-10-20 | 南京信息工程大学 | Automatic segmentation method for multiple tissues of skin pathological image |
CN111798428B (en) * | 2020-07-03 | 2023-05-30 | 南京信息工程大学 | A Method for Automatic Segmentation of Multiple Tissues in Skin Pathological Images |
CN112561916A (en) * | 2020-12-16 | 2021-03-26 | 深圳市商汤科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112990214A (en) * | 2021-02-20 | 2021-06-18 | 南京信息工程大学 | Medical image feature recognition prediction model |
CN113052124A (en) * | 2021-04-09 | 2021-06-29 | 济南博观智能科技有限公司 | Identification method and device for fogging scene and computer-readable storage medium |
CN114548179A (en) * | 2022-02-24 | 2022-05-27 | 北京航空航天大学 | Biological tissue identification method and device based on ultrasonic echo time-frequency spectrum features |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106780498A (en) | Based on point depth convolutional network epithelium and matrix organization's automatic division method pixel-by-pixel | |
Kashyap | Dilated residual grooming kernel model for breast cancer detection | |
Roy et al. | Patch-based system for classification of breast histology images using deep learning | |
Wan et al. | Accurate segmentation of overlapping cells in cervical cytology with deep convolutional neural networks | |
CN114287878B (en) | A method for diabetic retinopathy lesion image recognition based on attention model | |
Wan et al. | Robust nuclei segmentation in histopathology using ASPPU-Net and boundary refinement | |
CN110033032B (en) | Tissue slice classification method based on microscopic hyperspectral imaging technology | |
CN106096654A (en) | A kind of cell atypia automatic grading method tactful based on degree of depth study and combination | |
CN104346617B (en) | A kind of cell detection method based on sliding window and depth structure extraction feature | |
CN112215117A (en) | Abnormal cell identification method and system based on cervical cytology image | |
CN110059586B (en) | Iris positioning and segmenting system based on cavity residual error attention structure | |
CN112101451A (en) | Breast cancer histopathology type classification method based on generation of confrontation network screening image blocks | |
US20090161928A1 (en) | System and method for unsupervised detection and gleason grading of prostate cancer whole mounts using nir fluorscence | |
CN105931226A (en) | Automatic cell detection and segmentation method based on deep learning and using adaptive ellipse fitting | |
Pan et al. | Cell detection in pathology and microscopy images with multi-scale fully convolutional neural networks | |
CN109086836A (en) | A kind of automatic screening device of cancer of the esophagus pathological image and its discriminating method based on convolutional neural networks | |
Dai et al. | TD-Net: Trans-Deformer network for automatic pancreas segmentation | |
CN106056595A (en) | Method for automatically identifying whether thyroid nodule is benign or malignant based on deep convolutional neural network | |
Ju et al. | Classification of jujube defects in small data sets based on transfer learning | |
CN108629369A (en) | A kind of Visible Urine Sediment Components automatic identifying method based on Trimmed SSD | |
Mao et al. | Iteratively training classifiers for circulating tumor cell detection | |
CN113658151B (en) | Breast lesion magnetic resonance image classification method, equipment and readable storage medium | |
Cao et al. | An automatic breast cancer grading method in histopathological images based on pixel-, object-, and semantic-level features | |
Yonekura et al. | Improving the generalization of disease stage classification with deep CNN for glioma histopathological images | |
CN113011340B (en) | Cardiovascular operation index risk classification method and system based on retina image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170531 |