
CN114037720A - Method and device for pathological image segmentation and classification based on semi-supervised learning - Google Patents

Method and device for pathological image segmentation and classification based on semi-supervised learning

Info

Publication number
CN114037720A
CN114037720A
Authority
CN
China
Prior art keywords
segmentation
image
supervised learning
point
semi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111211187.6A
Other languages
Chinese (zh)
Other versions
CN114037720B (en)
Inventor
宋红
朱翊铭
杨健
付天宇
肖德强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202111211187.6A priority Critical patent/CN114037720B/en
Publication of CN114037720A publication Critical patent/CN114037720A/en
Application granted granted Critical
Publication of CN114037720B publication Critical patent/CN114037720B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/2155 Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 70/00 ICT specially adapted for the handling or processing of medical references
    • G16H 70/60 ICT specially adapted for the handling or processing of medical references relating to pathologies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Epidemiology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)

Abstract



The method and device for pathological image segmentation and classification based on semi-supervised learning effectively address the scale variation of images, change the structure of the network so that it performs segmentation and classification simultaneously, solve the class imbalance of samples in whole-slide images, and improve classification and segmentation performance. The method includes: (1) using a U-shaped network structure based on Swin-Transformer Blocks to extract multi-scale information from the image data; (2) using dense connections in upsampling; (3) performing weakly supervised segmentation with point annotations of cell images, using negative-boundary sparse supervision from annotation points and geometric constraints combined with a point-to-region spatially expanding Voronoi partition strategy for coarse segmentation, and, in the fine segmentation stage, further exploiting edge prior knowledge in the unmodified image through a contour-sensitive constraint function to adjust the nuclear contours; (4) in the network, modifying the final linear mapping layer so that it outputs the segmentation and classification results simultaneously.


Description

Pathological image segmentation and classification method and device based on semi-supervised learning
Technical Field
The invention relates to the technical field of medical image processing, in particular to a pathological image segmentation and classification method based on semi-supervised learning and a cytopathological image segmentation and classification device based on semi-supervised learning.
Background
In pathological image analysis, images are mainly stored as whole-slide images (WSI): traditional pathological slides are scanned by a digital scanner into high-resolution digital images, which a computer then seamlessly stitches and integrates into a single viewable digital image. Compared with traditional glass slides, WSIs have the advantages of easy storage, resistance to fading, resistance to loss, and easy retrieval. When the cells in a pathological image are to be segmented, labelling the images is time-consuming and labour-intensive: pixel-level labelling of every cell requires a great deal of manpower, and a finished WSI typically contains tens of millions of cell images. At present there is little research on point-supervised nucleus segmentation, and the complex background and multi-scale information in pathological images degrade segmentation and classification performance.
The difficulties of pathological image analysis are as follows:
1. The images are large, the data are difficult to label, and they contain information at many different scales.
2. Segmentation requires a gold standard that differs from the class labels, and acquiring this segmentation gold standard is time-consuming and labour-intensive.
3. The positive and negative samples of the image classes suffer from class imbalance.
The invention content is as follows:
In order to overcome the defects of the prior art, the invention aims to provide a pathological image segmentation and classification method based on semi-supervised learning, which can effectively solve the problem of image segmentation under weak annotation, change the network structure so that segmentation and classification are performed simultaneously, and alleviate the difficulty of image annotation.
The technical scheme of the invention is as follows: the pathological image segmentation and classification method based on semi-supervised learning comprises the following steps:
(1) using a U-shaped network structure based on Swin-Transformer Blocks to adaptively extract multi-scale information from the image data;
(2) using densely connected bilinear interpolation in the upsampling process, reducing the loss of decoder fineness while alleviating vanishing gradients and overfitting;
(3) performing weakly supervised segmentation using point annotations of the cell images: the coarse segmentation stage uses negative-boundary sparse supervision from annotation points and geometric constraints, combined with a point-to-region spatially expanding Voronoi partition strategy; the fine segmentation stage further uses edge prior knowledge in the unmodified image, through a contour-sensitive constraint function, to adjust the nuclear contours;
(4) in the network, outputting the segmentation and classification results simultaneously by modifying the final linear mapping layer.
According to the invention, the Swin-Transformer-based U-shaped network structure effectively extracts multi-scale information from the image, and the dense connections adopted in the upsampling process reduce the loss of decoder fineness while alleviating vanishing gradients and overfitting. The structure of the network is changed so that it performs classification and segmentation simultaneously. Two-stage segmentation is performed using the more readily available cell point annotations: in the first stage, coarse segmentation uses negative-boundary sparse supervision from annotation points and geometric constraints combined with a point-to-region spatially expanding Voronoi partition strategy; in the second stage, a contour-sensitive constraint function is proposed, and the nuclear contours are further adjusted using edge prior knowledge in the unmodified image.
There is also provided a pathological image segmentation and classification device based on semi-supervised learning, comprising:
the extraction module is used for enabling a network to adaptively extract multi-scale information from the image based on a U-shaped network structure of Swin-Transformer Blocks;
the up-sampling module, which adopts a densely connected structure based on bilinear interpolation, reducing the loss of decoder fineness while alleviating vanishing gradients and overfitting;
the weak supervision module, which performs coarse segmentation using the weak annotations of the cell image, based on annotation points, geometric constraints, and boundary constraints combined with a point-to-region Voronoi partition strategy, and which, in the fine segmentation stage, adjusts the nuclear contours through a contour-sensitive constraint function using edge prior information of the image;
a result output module, configured to change the last fully connected layer of the network so that it outputs both the segmentation results and the classification results.
Drawings
Fig. 1 shows a flowchart of the method of pathological image segmentation and classification based on semi-supervised learning according to the present invention.
FIG. 2 shows a structure diagram of Swin-Transformer Block employed in the present invention.
Fig. 3 shows hierarchical connection (a), residual connection (b), and dense connection (c), respectively.
Fig. 4 shows a schematic diagram of a bilinear interpolation algorithm.
Fig. 5 shows a deep learning overall network architecture employed by the present invention.
Fig. 6 shows the original image with its point annotation map (a), the Voronoi region constraint map (b), and the point distance map (c).
Fig. 7 shows a first stage coarse segmentation flow chart.
Fig. 8 shows, for the second stage, the original image (a), the first-stage coarse segmentation map (b), the Kirsch operator edge extraction map (c), the morphological edge extraction map (d), and the sparse region map (e).
Detailed Description
As shown in fig. 1, the method for pathological image segmentation and classification based on semi-supervised learning includes the following steps:
(1) using a U-shaped network structure based on Swin-Transformer Blocks to adaptively extract multi-scale information from the image data;
(2) using densely connected bilinear interpolation in the upsampling process, reducing the loss of decoder fineness while alleviating vanishing gradients and overfitting;
(3) performing weakly supervised segmentation using point annotations of the cell images: the coarse segmentation stage uses negative-boundary sparse supervision from annotation points and geometric constraints, combined with a point-to-region spatially expanding Voronoi partition strategy; the fine segmentation stage further uses edge prior knowledge in the unmodified image, through a contour-sensitive constraint function, to adjust the nuclear contours;
(4) in the network, outputting the segmentation and classification results simultaneously by modifying the final linear mapping layer.
According to the invention, the Swin-Transformer-based U-shaped network structure effectively extracts multi-scale information from the image, and the dense connections adopted in the upsampling process reduce the loss of decoder fineness while alleviating vanishing gradients and overfitting. The structure of the network is changed so that it performs classification and segmentation simultaneously. Two-stage segmentation is performed using the more readily available cell point annotations: in the first stage, coarse segmentation uses negative-boundary sparse supervision from annotation points and geometric constraints combined with a point-to-region spatially expanding Voronoi partition strategy; in the second stage, a contour-sensitive constraint function is proposed, and the nuclear contours are further adjusted using edge prior knowledge in the unmodified image.
Preferably, in step (1), the Swin-Transformer Blocks perform adaptive feature extraction using a shifted-window multi-head attention module, residual connections, and a multi-layer perceptron. The computation performed by the Swin-Transformer Blocks of the l-th layer is given by equation (1):
ẑ^l = W-MSA(LN(z^{l-1})) + z^{l-1}
z^l = MLP(LN(ẑ^l)) + ẑ^l
ẑ^{l+1} = SW-MSA(LN(z^l)) + z^l
z^{l+1} = MLP(LN(ẑ^{l+1})) + ẑ^{l+1}    (1)
where ẑ^l and z^l denote the outputs of the (S)W-MSA module and of the multi-layer perceptron of the l-th layer, respectively.
The self-attention mechanism is computed by equation (2):
Attention(Q, K, V) = SoftMax(QKᵀ/√d + B)V    (2)
where Q, K, V ∈ R^{M²×d} denote the query, key, and value matrices, M² and d denote the number of patches in a window and the dimension of the query and key, respectively, and B takes its values from the matrix B̂ ∈ R^{(2M−1)×(2M−1)}.
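The attention computation of equation (2) can be sketched in plain Python (a toy, single-window version over explicit lists; the actual network operates on batched tensors with learned relative position biases):

```python
import math

def softmax(row):
    # numerically stable softmax over one row of attention scores
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def window_attention(Q, K, V, B):
    """Attention(Q, K, V) = SoftMax(Q K^T / sqrt(d) + B) V, per equation (2).

    Q, K, V: lists of row vectors, one per patch in the window;
    B: the bias added to each attention score before the softmax.
    """
    d = len(Q[0])
    scores = [[sum(q_i * k_i for q_i, k_i in zip(q, k)) / math.sqrt(d) + B[i][j]
               for j, k in enumerate(K)]
              for i, q in enumerate(Q)]
    probs = [softmax(row) for row in scores]
    return [[sum(p * V[j][c] for j, p in enumerate(row))
             for c in range(len(V[0]))]
            for row in probs]
```

With a zero bias and identical scores, the attention is uniform over the window; a large bias entry shifts the output toward the corresponding value vector.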
Preferably, in step (2), during upsampling a densely connected structure reduces the loss of decoder fineness while alleviating vanishing gradients and overfitting. Within the dense connections, the upsampling process is equation (3):
x_l = f_n([x_0, x_1, …, x_{l−1}])    (3)
where f_n denotes the upsampling interpolation method, here bilinear interpolation. Given the values of a function at the four points Q11 = (x1, y1), Q12 = (x1, y2), Q21 = (x2, y1), Q22 = (x2, y2), the bilinear interpolation formula for a pixel (x, y) is equation (4):
f(x, y) ≈ [f(Q11)(x2 − x)(y2 − y) + f(Q21)(x − x1)(y2 − y) + f(Q12)(x2 − x)(y − y1) + f(Q22)(x − x1)(y − y1)] / [(x2 − x1)(y2 − y1)]    (4)
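The interpolation of equation (4) can be written out directly (a minimal scalar sketch; argument names follow the four known points Q11, Q12, Q21, Q22):

```python
def bilinear_interpolate(x, y, x1, y1, x2, y2, q11, q12, q21, q22):
    """Interpolate the value at (x, y) from the four surrounding known
    values q11 at (x1, y1), q12 at (x1, y2), q21 at (x2, y1), q22 at (x2, y2)."""
    area = (x2 - x1) * (y2 - y1)
    return (q11 * (x2 - x) * (y2 - y)
            + q21 * (x - x1) * (y2 - y)
            + q12 * (x2 - x) * (y - y1)
            + q22 * (x - x1) * (y - y1)) / area
```

At a corner the interpolation reproduces the known value exactly, and at the centre of the cell it returns the average of the four corners.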
preferably, in the step (3), the cell image is first point-annotated, two distance maps are generated to focus on the positive pixel and the negative pixel respectively, and the distance maps including the point annotation
Figure BDA0003308999040000057
And edge map
Figure BDA0003308999040000058
Point annotated distance map
Figure BDA0003308999040000059
For focusing on high confidence positive pixels, assuming the annotation point for each kernel is close to the center of the kernel, then the point annotation is expanded by a distance filter to a reliable kernel surveillance area,
Figure BDA00033089990400000510
is calculated by the formula (5), as follows:
Figure BDA00033089990400000511
wherein: m and n are respectively mark points of the distance annotation graph marked by the points, and alpha is a scaling parameter for controlling the distribution proportion;
negative pixels with high confidence are concentrated on Voronoi diagrams and are marked as
Figure BDA0003308999040000061
Obtaining the partition edges by a Voronoi diagram, which can be further enlarged by a fast decreasing response of the distance filter (5);
Figure BDA0003308999040000062
for describing negative pixels with high confidence.
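The point-to-region Voronoi expansion can be illustrated on a toy grid (a brute-force nearest-seed sketch; a practical implementation would use a distance transform over the whole image):

```python
def voronoi_labels(points, h, w):
    """Assign every pixel of an h-by-w grid to its nearest annotation point."""
    return [[min(range(len(points)),
                 key=lambda i: (points[i][0] - r) ** 2 + (points[i][1] - c) ** 2)
             for c in range(w)]
            for r in range(h)]

def voronoi_edge_pixels(labels):
    """Pixels whose right or lower neighbour belongs to a different cell:
    the partition edges that supervise high-confidence negative pixels."""
    h, w = len(labels), len(labels[0])
    edges = set()
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < h and cc < w and labels[rr][cc] != labels[r][c]:
                    edges.add((r, c))
    return edges
```

Each annotation point thus expands into its own region, and the region boundaries, which almost surely lie between nuclei, provide the negative supervision.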
Preferably, in the weakly supervised learning of step (3):
First, a polar loss function, given by equation (6), is used to update the Swin-Transformer-UNet parameters from the output segmentation map; H(x) performs self-supervised learning by binarising the output segmentation map into an image mask.
Two sparse loss functions are also defined to update the parameters of the Transformer network, attending to subsets of the positive and negative pixels respectively: L_point, given by equation (7), and L_voronoi, given by equation (8), where the (·) operation denotes the pixel-wise product and a ReLU operation extracts a reliable weight mask for the sparse loss computation, so that L_point attends only to high-confidence positive pixels and L_voronoi only to high-confidence negative pixels.
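The masking mechanism shared by the two sparse losses, a pixel-wise product with a ReLU-extracted confidence mask so that only high-confidence pixels contribute, can be sketched as follows (the squared-error term is an illustrative assumption, not the patent's exact loss):

```python
def relu(x):
    return x if x > 0.0 else 0.0

def masked_sparse_loss(pred, conf_map, target):
    """Sparse loss over one map: each pixel's error is weighted by
    ReLU(conf_map), so only the high-confidence entries of the distance
    or Voronoi map contribute to the loss."""
    num = den = 0.0
    for p_row, c_row, t_row in zip(pred, conf_map, target):
        for p, c, t in zip(p_row, c_row, t_row):
            w = relu(c)                 # confidence weight mask
            num += w * (p - t) ** 2     # pixel-wise product with the error
            den += w
    return num / den if den > 0.0 else 0.0
```

Pixels with non-positive confidence are masked out entirely, which is what lets L_point ignore everything but confident positives and L_voronoi everything but confident negatives.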
Preferably, in step (3), in the first-stage coarse segmentation, the initial segmentation map serves as the expansion of the point labels in the initial state. The segmentation model is iterated using expanded point-distance maps, which are updated by the most recently trained model; the point-distance map is updated according to equation (5), with the annotation map P replaced by the coarse segmentation result of the previous round. This operation is repeated several times, and the resulting coarse segmentation is denoted R_coarse.
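The iterative coarse segmentation can be sketched as a generic self-training loop (the `train` and `predict_coarse` callables are hypothetical stand-ins for fitting the network and producing one round's coarse result):

```python
def self_training(initial_annotation, train, predict_coarse, rounds):
    """Alternate training and prediction: each round's coarse segmentation
    replaces the annotation map P used to rebuild the distance maps."""
    annotation = initial_annotation
    for _ in range(rounds):
        model = train(annotation)                       # fit on current supervision
        annotation = predict_coarse(model, annotation)  # new coarse result
    return annotation                                   # final R_coarse
```

The loop only fixes the control flow; what "train" and "predict" mean is entirely up to the caller.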
Preferably, in step (3), the fine segmentation stage uses a local-region map supervision method. The apparent contour of the input image is extracted as additional supervision: the edge map is first refined, and the result is denoted E_r:
E_r = (dilation(R_coarse, k) − erosion(R_coarse, k)) & E_Kirsch    (9)
where dilation and erosion are the morphological dilation and erosion of the image over k pixels, respectively, and E_Kirsch denotes the image obtained by applying the Kirsch operator to extract the edges of the input image.
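Equation (9) can be illustrated on a binary toy mask (pure-Python 4-neighbour morphology; the Kirsch edge map E_Kirsch is taken here as a given binary input):

```python
def _neighbour_op(mask, op):
    # apply op (max for dilation, min for erosion) over each pixel's
    # 4-neighbourhood; one iteration = one pixel of growth or shrinkage
    h, w = len(mask), len(mask[0])
    out = []
    for r in range(h):
        row = []
        for c in range(w):
            vals = [mask[r][c]]
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    vals.append(mask[rr][cc])
            row.append(op(vals))
        out.append(row)
    return out

def refined_edge_map(r_coarse, k, e_kirsch):
    """E_r = (dilation(R_coarse, k) - erosion(R_coarse, k)) & E_Kirsch."""
    dil, ero = r_coarse, r_coarse
    for _ in range(k):
        dil = _neighbour_op(dil, max)
        ero = _neighbour_op(ero, min)
    return [[(d - e) & ek for d, e, ek in zip(dr, er, kr)]
            for dr, er, kr in zip(dil, ero, e_kirsch)]
```

The dilation-minus-erosion term yields a band around the coarse contour, and intersecting it with the Kirsch edges keeps only edge responses near the predicted boundary.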
Preferably, in the second stage of step (3), sparse supervised learning, in order to implement supplementary boundary supervision, a contour-sensitive loss L_contour, given by equation (10), is added to fine-tune the nuclear contours, again using the local-region map for supervision, where E_r denotes the refined edge map and Kirsch denotes the Kirsch operator.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, includes the steps of the methods of the above embodiments. The storage medium may be a ROM/RAM, a magnetic disk, an optical disk, a memory card, or the like. Therefore, corresponding to the method of the invention, the invention also includes a device for pathological image segmentation and classification based on semi-supervised learning, generally expressed as functional modules corresponding to the steps of the method. The device includes:
the extraction module is used for enabling a network to adaptively extract multi-scale information from the image based on a U-shaped network structure of Swin-Transformer Blocks;
the up-sampling module, which adopts a densely connected structure based on bilinear interpolation, reducing the loss of decoder fineness while alleviating vanishing gradients and overfitting;
the weak supervision module, which performs coarse segmentation using the weak annotations of the cell image, based on annotation points, geometric constraints, and boundary constraints combined with a point-to-region Voronoi partition strategy, and which, in the fine segmentation stage, adjusts the nuclear contours through a contour-sensitive constraint function using edge prior information of the image;
a result output module, configured to change the last fully connected layer of the network so that it outputs both the segmentation results and the classification results.
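The two-output design of the result output module can be sketched with plain linear maps (hypothetical toy weights; in the actual network the final mapping layer follows the Transformer decoder):

```python
def linear(x, weights, bias):
    # one fully connected layer: y = W x + b
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def dual_head(feature, seg_w, seg_b, cls_w, cls_b):
    """One shared feature vector feeds two final linear mappings, so the
    network emits segmentation logits and classification logits together."""
    return linear(feature, seg_w, seg_b), linear(feature, cls_w, cls_b)
```

Both heads read the same backbone feature, so training one task regularises the other while a single forward pass serves both outputs.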
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still belong to the protection scope of the technical solution of the present invention.

Claims (9)

1.基于半监督学习的病理图像分割与分类的方法,其特征在于:其包括以下步骤:1. A method for pathological image segmentation and classification based on semi-supervised learning, characterized in that: it comprises the following steps: (1)使用基于Swin-Transformer Blocks的U形网络结构自适应地从图像数据中提取多尺度信息;(1) Using a U-shaped network structure based on Swin-Transformer Blocks to adaptively extract multi-scale information from image data; (2)在上采样的过程使用双线性插值密集连接,减少解码器的精细度损失的同时缓解梯度消失和过拟合的问题;(2) In the process of upsampling, the dense connection of bilinear interpolation is used to reduce the fineness loss of the decoder and alleviate the problems of gradient disappearance and overfitting; (3)利用细胞图像的点注释进行弱监督分割,粗分割阶段采用注释点和几何约束的负边界稀疏监督,结合点到区域的空间拓展的Voronoi划分策略;在精细分割阶段,通过轮廓敏感约束函数,进一步利用未修饰图像中的边缘先验知识来调整核轮廓;(3) Weakly supervised segmentation is performed using point annotations of cell images. In the coarse segmentation stage, the negative boundary sparse supervision of annotation points and geometric constraints is used, combined with the Voronoi division strategy of point-to-region spatial expansion; in the fine segmentation stage, contour-sensitive constraints are adopted. function, which further utilizes the prior knowledge of edges in the unretouched image to adjust the kernel contour; (4)在网络中,通过修改最后的线性映射层来同时输出分割和分类的结果。(4) In the network, the results of segmentation and classification are output simultaneously by modifying the final linear mapping layer. 2.根据权利要求1所述的基于半监督学习的病理图像分割与分类的方法,其特征在于:所述步骤(1)中,Swin-Transformer Bolcks基于移动窗口的多头注意力模块、残差连接与多层感知机进行自适应特征提取,第i层的Swin-Transformer Bolcks进行特征提取时的计算方式,为公式(1)2. 
the method for pathological image segmentation and classification based on semi-supervised learning according to claim 1, is characterized in that: in described step (1), Swin-Transformer Bolcks is based on the multi-head attention module of moving window, residual connection With multi-layer perceptron for adaptive feature extraction, the calculation method of Swin-Transformer Bolcks in the i-th layer for feature extraction is formula (1)
Figure FDA0003308999030000011
Figure FDA0003308999030000011
其中:
Figure FDA0003308999030000012
和zl分别代表了第i层的(S)W-MSA和多层感知机的输出;
in:
Figure FDA0003308999030000012
and z l represent the output of the (S)W-MSA and the multilayer perceptron of the i-th layer, respectively;
自注意力机制通过公式(2)进行计算:The self-attention mechanism is calculated by formula (2):
Figure FDA0003308999030000021
Figure FDA0003308999030000021
其中:Q,K,
Figure FDA0003308999030000022
表示query,key和value矩阵,M2和d分别代表了一个窗口下的的patch数量和query,key的维度,B代表了混淆矩阵
Figure FDA0003308999030000023
中的值。
Among them: Q, K,
Figure FDA0003308999030000022
Represents the query, key and value matrix, M 2 and d represent the number of patches in a window and the dimension of query and key, respectively, and B represents the confusion matrix
Figure FDA0003308999030000023
value in .
3.根据权利要求2所述的基于半监督学习的病理图像分割与分类的方法,其特征在于:所述步骤(2)中,在上采样过程中,采用密集连接结构减少解码器的精细度损失的同时缓解梯度消失和过拟合的问题,在密集连接中,上采样过程为公式(3):3. The method for pathological image segmentation and classification based on semi-supervised learning according to claim 2, wherein in the step (2), in the upsampling process, a dense connection structure is adopted to reduce the fineness of the decoder At the same time of loss, the problem of gradient disappearance and overfitting is alleviated. In dense connection, the upsampling process is formula (3):
Figure FDA0003308999030000024
Figure FDA0003308999030000024
其中:fn代表上采样的差值方法,本方法为双线性差值法;双线性差值中已知函数Q11=(x1,y1),Q12=(x1,y2),Q21=(x2,y1),Q22=(x2,y2)四个点的值,对于一个像素点(x,y)双线性插值的公式为(4):Among them: f n represents the difference method of up-sampling, this method is the bilinear difference method; in the bilinear difference, the known functions Q 11 =(x 1 , y 1 ), Q 12 =(x 1 , y ) 2 ), Q 21 =(x 2 , y 1 ), Q 22 =(x 2 , y 2 ) values of four points, the formula for bilinear interpolation of one pixel point (x, y) is (4):
Figure FDA0003308999030000025
Figure FDA0003308999030000025
4.根据权利要求3所述的基于半监督学习的病理图像分割与分类的方法,其特征在于:所述步骤(3)中首先对细胞图像进行点注释,分别生成两个距离图分别聚焦到正像素与负像素,包括点注释的距离图
Figure FDA0003308999030000026
与边缘图
Figure FDA0003308999030000027
4. The method for pathological image segmentation and classification based on semi-supervised learning according to claim 3, characterized in that: in the step (3), point annotation is first performed on the cell image, and two distance maps are respectively generated to focus on Distance maps for positive and negative pixels, including point annotations
Figure FDA0003308999030000026
with edge graph
Figure FDA0003308999030000027
点注释的距离图
Figure FDA0003308999030000028
用来专注于高置信度的正像素,假设每个核的注释点靠近核的中心,之后通过距离滤波器将点标注扩大到可靠的核监督区域,
Figure FDA0003308999030000029
中的每一个元素通过(5)式计算,如下所示:
Distance map for point annotations
Figure FDA0003308999030000028
is used to focus on high-confidence positive pixels, assuming that the annotation points of each kernel are close to the center of the kernel, and then expand the point annotations to reliable kernel supervision regions through distance filters,
Figure FDA0003308999030000029
Each element in is calculated by equation (5) as follows:
Figure FDA00033089990300000210
Figure FDA00033089990300000210
其中:m,n分别为点标注的距离注释图的标记点,α为控制分布比例的缩放参数;Among them: m, n are the marked points of the distance annotation map of the point labeling, and α is the scaling parameter that controls the distribution scale; 以Voronoi图来专注于高置信度的负像素,记为
Figure FDA0003308999030000031
通过Voronoi图来获得分区边缘,这些边缘能够通过距离滤波器(5)式的快速减小的响应进一步扩大;
Figure FDA0003308999030000032
用于描述置信度高的负像素。
Use Voronoi diagram to focus on high-confidence negative pixels, denoted as
Figure FDA0003308999030000031
The partition edges are obtained by Voronoi diagrams, which can be further enlarged by the rapidly decreasing response of the distance filter (5);
Figure FDA0003308999030000032
Used to describe negative pixels with high confidence.
5.根据权利要求4所述的基于半监督学习的病理图像分割与分类的方法,其特征在于:所述步骤(3)的弱监督学习中:5. the method for pathological image segmentation and classification based on semi-supervised learning according to claim 4, is characterized in that: in the weakly supervised learning of described step (3): 首先,采用polar损失函数用来进行Swin-Transformer-Unet的参数更新,用
Figure FDA0003308999030000037
表示输出的分割图,ploar损失函数为式(6):
First, the polar loss function is used to update the parameters of Swin-Transformer-Unet, using
Figure FDA0003308999030000037
Represents the segmentation map of the output, and the ploar loss function is formula (6):
Figure FDA0003308999030000033
Figure FDA0003308999030000033
其中,H(x)将输出的分割图通过修正为二值化图像mask进行自监督学习;Among them, H(x) performs self-supervised learning by modifying the output segmentation map into a binarized image mask; 同时设置两个稀疏的损失函数,用来更新Transformer网络中的参数,分别关注部分正负像素:At the same time, two sparse loss functions are set to update the parameters in the Transformer network, focusing on some positive and negative pixels:
Figure FDA0003308999030000034
Figure FDA0003308999030000034
Figure FDA0003308999030000035
Figure FDA0003308999030000035
其中:(·)点运算代表逐像素乘积,ReLu操作提取可靠的权重mask进行稀疏损失计算,让Lpoint只关注高置信度的正像素,Lvoronoi只关注高置信度的负像素。Among them: ( ) point operation represents pixel-by-pixel product, ReLu operation extracts a reliable weight mask for sparse loss calculation, let L point only focus on positive pixels with high confidence, and L voronoi only focus on negative pixels with high confidence.
6.根据权要利求5所述的基于半监督学习的病理图像分割与分类的方法,其特征在于:所述步骤(3)中,在第一阶段粗分割阶段,通过初始分割图作为初始状态下点标注的拓展;使用拓展的点距离图对分割模型进行迭代,这些图由最新训练的模型进行更新,点距离图
Figure FDA0003308999030000036
根据式(5)进行更新,其中注释图P被替换为前一轮的粗分割结果;该操作重复若干次,得到粗分割结果记为Rcoarse
6. The method for pathological image segmentation and classification based on semi-supervised learning according to claim 5, characterized in that: in the step (3), in the first stage rough segmentation stage, the initial segmentation map is used as the initial Extension of point labeling in state; segmentation model is iterated using extended point distance maps, which are updated by the newly trained model, point distance maps
Figure FDA0003308999030000036
Update according to formula (5), in which the annotation map P is replaced with the coarse segmentation result of the previous round; this operation is repeated several times, and the coarse segmentation result obtained is recorded as R coarse .
7.根据权利要求6所述的基于半监督学习的病理图像分割与分类的方法,其特征在于:所述步骤(3)中,在第一阶段精分割阶段,使用了局部区域图监督方法;通过提取输入图像的表观轮廓作为额外的监督,首先对边缘图进行精细化处理结果记为Er7. The method for pathological image segmentation and classification based on semi-supervised learning according to claim 6, characterized in that: in the step (3), in the first stage fine segmentation stage, a local area map supervision method is used; By extracting the apparent contour of the input image as additional supervision, the edge map is first refined and the result is denoted as E r : Er=(dilation(Rcoarse,k)-erosion(Rcoarse,k))&EKirsch (9)E r = (dilation(R coarse , k)-erosion(R coarse , k))&E Kirsch (9) 其中,dilation和erosion分别是图像在k个像素上的膨胀与腐蚀形态学操作,EKirsch代表Kirsch算子对输入图像边缘提取后的图像。Among them, dilation and erosion are the dilation and erosion morphological operations of the image on k pixels, respectively, and E Kirsch represents the image after the Kirsch operator extracts the edge of the input image. 8.根据权利要求7所述的基于半监督学习的病理图像分割与分类的方法,其特征在于:所述步骤(3)中的第二阶段,稀疏监督学习中,为了实现补充边界监督,附加轮廓敏感损失Lcontour来微调核轮廓,同样,使用局部区域图进行监督8. The method for pathological image segmentation and classification based on semi-supervised learning according to claim 7, wherein in the second stage of the step (3), in the sparse supervised learning, in order to realize supplementary boundary supervision, additional A contour-sensitive loss L contour to fine-tune the kernel contour, again, using local area maps for supervision
Figure FDA0003308999030000041
where E_r denotes the refined edge map and Kirsch denotes the Kirsch operator.
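As a rough illustration (not the patented implementation), the edge-map refinement of Eq. (9) can be reproduced with standard binary morphology plus a Kirsch compass-kernel edge detector. The fraction used to binarize the Kirsch response is an assumption, since the claim does not specify one:

```python
import numpy as np
from scipy import ndimage

def kirsch_kernels():
    """The 8 Kirsch compass kernels, generated by rotating the border
    values of the base (north) kernel in 45-degree steps."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [5, 5, 5, -3, -3, -3, -3, -3]
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(ring, vals[shift:] + vals[:shift]):
            k[r, c] = v
        kernels.append(k)
    return kernels

def refine_edge_map(r_coarse, image, k=2, edge_frac=0.5):
    """Eq. (9): E_r = (dilation(R_coarse, k) - erosion(R_coarse, k)) & E_Kirsch."""
    struct = ndimage.generate_binary_structure(2, 1)
    dil = ndimage.binary_dilation(r_coarse, struct, iterations=k)
    ero = ndimage.binary_erosion(r_coarse, struct, iterations=k)
    band = dil & ~ero  # morphological gradient: a band around the coarse boundary
    resp = np.max(np.stack([ndimage.convolve(image.astype(float), kk, mode="nearest")
                            for kk in kirsch_kernels()]), axis=0)
    e_kirsch = resp > edge_frac * resp.max()  # assumed binarization of the Kirsch response
    return band & e_kirsch
```

The intersection keeps only Kirsch edge pixels that fall inside the dilation-minus-erosion band around the coarse boundary, which is what restricts the refined edge map E_r to the vicinity of the coarse nuclei.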
9. A device for pathological image segmentation and classification based on semi-supervised learning, characterized in that it comprises:
an extraction module, which uses a U-shaped network structure based on Swin-Transformer Blocks so that the network adaptively extracts multi-scale information from the image;
an upsampling module, which adopts a densely connected structure based on bilinear interpolation to reduce the loss of decoder precision and alleviate the problems of vanishing gradients and overfitting;
a weakly supervised module, which exploits the weak annotations of cell images: coarse segmentation is performed using annotation points together with geometric constraints, boundary constraints, and a point-to-region Voronoi partition strategy, while in the fine segmentation stage the nuclear contours are adjusted through a contour-sensitive constraint function using the edge prior information of the image;
a result output module, configured to modify the last fully connected layer of the network so that it outputs both segmentation and classification results.
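The point-to-region Voronoi partition mentioned in the weakly supervised module can be illustrated with a brute-force nearest-point labeling; the function name and the (row, col) point convention are illustrative, not from the patent:

```python
import numpy as np

def voronoi_partition(points, shape):
    """Label every pixel with the index of its nearest annotation point,
    i.e. the discrete Voronoi partition induced by the point annotations."""
    pts = np.asarray(points, dtype=float)  # (N, 2) in (row, col) order
    rows, cols = np.indices(shape)
    # Squared Euclidean distance from each pixel to each point: shape (H, W, N).
    d2 = (rows[..., None] - pts[:, 0]) ** 2 + (cols[..., None] - pts[:, 1]) ** 2
    return np.argmin(d2, axis=-1)
```

Each Voronoi cell then serves as the region over which the corresponding point annotation is propagated when building coarse supervision.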
CN202111211187.6A 2021-10-18 2021-10-18 Method and device for pathological image segmentation and classification based on semi-supervised learning Active CN114037720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111211187.6A CN114037720B (en) 2021-10-18 2021-10-18 Method and device for pathological image segmentation and classification based on semi-supervised learning


Publications (2)

Publication Number Publication Date
CN114037720A true CN114037720A (en) 2022-02-11
CN114037720B CN114037720B (en) 2025-07-04

Family ID: 80135431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111211187.6A Active CN114037720B (en) 2021-10-18 2021-10-18 Method and device for pathological image segmentation and classification based on semi-supervised learning

Country Status (1)

Country Link
CN (1) CN114037720B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190205758A1 (en) * 2016-12-30 2019-07-04 Konica Minolta Laboratory U.S.A., Inc. Gland segmentation with deeply-supervised multi-level deconvolution networks
CN110837836A (en) * 2019-11-05 2020-02-25 中国科学技术大学 Semi-supervised semantic segmentation method based on maximized confidence
CN111986150A (en) * 2020-07-17 2020-11-24 万达信息股份有限公司 Interactive marking refinement method for digital pathological image
CN113379764A (en) * 2021-06-02 2021-09-10 厦门理工学院 Pathological image segmentation method based on domain confrontation self-supervision learning


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116150617A (en) * 2022-12-16 2023-05-23 上海药明康德新药开发有限公司 Tumor infiltration lymphocyte identification method and system
CN116150617B (en) * 2022-12-16 2024-04-12 上海药明康德新药开发有限公司 Tumor infiltration lymphocyte identification method and system
CN116129428A (en) * 2023-02-28 2023-05-16 成都华西精准医学产业技术研究院有限公司 Cell instance segmentation model training method, cell instance segmentation method and system thereof
CN117216002A (en) * 2023-08-30 2023-12-12 广州金域医学检验中心有限公司 Intelligent pathological resource archiving method and device, electronic equipment and storage medium
CN117216002B (en) * 2023-08-30 2024-04-09 太原金域临床检验所有限公司 Intelligent pathological resource archiving method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114037720B (en) 2025-07-04

Similar Documents

Publication Publication Date Title
CN105551036B (en) A kind of training method and device of deep learning network
CN114037720B (en) Method and device for pathological image segmentation and classification based on semi-supervised learning
CN110400323B (en) Automatic cutout system, method and device
CN110781775B (en) Remote sensing image water body information accurate segmentation method supported by multi-scale features
WO2020056791A1 (en) Method and apparatus for super-resolution reconstruction of multi-scale dilated convolution neural network
CN110472676A (en) Stomach morning cancerous tissue image classification system based on deep neural network
CN111709929B (en) Lung canceration region segmentation and classification detection system
CN111738318A (en) A Large Image Classification Method Based on Graph Neural Network
CN113076871A (en) Fish shoal automatic detection method based on target shielding compensation
CN111640116B (en) Aerial photography graph building segmentation method and device based on deep convolutional residual error network
CN109636722B (en) Method for reconstructing super-resolution of online dictionary learning based on sparse representation
CN116645592B (en) A crack detection method and storage medium based on image processing
CN112396036B (en) An Occluded Person Re-Identification Method Combining Spatial Transformation Network and Multi-Scale Feature Extraction
CN110866938A (en) Full-automatic video moving object segmentation method
CN114519717A (en) Image processing method and device, computer equipment and storage medium
CN115423802A (en) Automatic classification and segmentation method of squamous epithelial tumor cell images based on deep learning
CN108446588A (en) A kind of double phase remote sensing image variation detection methods and system
CN115908363B (en) Tumor cell statistics method, device, equipment and storage medium
CN114359739B (en) Target identification method and device
CN115937704A (en) Remote sensing image road segmentation method based on topology perception neural network
CN110348339B (en) Method for extracting handwritten document text lines based on case segmentation
CN113065547A (en) A Weakly Supervised Text Detection Method Based on Character Supervision Information
CN111144422A (en) Positioning identification method and system for aircraft component
CN110100263B (en) Image reconstruction method and device
CN116363103A (en) Tumor interstitial ratio calculation method and terminal for full-field digital pathological section

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant