
CN115294157A - Pathological image processing method, model and equipment - Google Patents

Pathological image processing method, model and equipment

Info

Publication number
CN115294157A
CN115294157A
Authority
CN
China
Prior art keywords
pathological image
model
image processing
layer
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210960232.6A
Other languages
Chinese (zh)
Inventor
王宇光
沈逸卿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiao Tong University
Original Assignee
Shanghai Jiao Tong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiao Tong University filed Critical Shanghai Jiao Tong University
Priority to CN202210960232.6A priority Critical patent/CN115294157A/en
Publication of CN115294157A publication Critical patent/CN115294157A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30028Colon; Small intestine
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30092Stomach; Gastric

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pathological image processing method comprising: constructing a cell graph from a received pathological image, the cell graph containing cell structure information; and inputting the cell graph into a trained pathological image processing model to obtain a fused feature H_O used to predict the expression of gene markers. The pathological image processing model combines a graph convolutional neural network (GNN) model and a convolutional neural network (CNN) model: the GNN model extracts a low-dimensional feature H_G from the cell graph, the CNN model extracts an image-level low-dimensional feature H_I from the pathological image, and the features H_G and H_I are concatenated and passed through a fusion layer to obtain the fused feature H_O.

Description

A pathological image processing method, model and device

Technical Field

The invention belongs to the fields of biology and pathology, and in particular relates to a pathological image processing method, model and device.

Background

In biology and pathology, structural information in cell morphology, such as how cells aggregate and the distances between them, reflects the expression of gene markers (for example, microsatellite instability in colorectal cancer). However, little has been reported on how to exploit this property for gene-marker prediction.

Summary of the Invention

In one embodiment of the present invention, a pathological image processing method constructs a cell graph from a received pathological image, the cell graph containing cell structure information;

the cell graph is then input into a trained pathological image processing model to obtain a fused feature H_O used to predict the expression of gene markers.

Embodiments of the present invention explicitly extract the cell structure information of a pathological image as a graph and use a graph convolutional neural network to extract geometric features from that graph. These features are fused with features extracted from the pathological image by a convolutional neural network and used to predict the expression of gene markers.

The invention proposes a fusion model of graph convolution and convolutional neural networks for gene-marker prediction from pathological images. On three pathological datasets (predicting microsatellite instability (MSI) from colorectal cancer pathological images, predicting MSI in gastric cancer, and predicting programmed death-ligand 1 (PD-L1) expression in gastric cancer), the model achieves significant experimental improvements, with classification accuracy clearly higher than that of pathological image processing models that use only a convolutional neural network.

Brief Description of the Drawings

The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily understood by reading the following detailed description with reference to the accompanying drawings. In the drawings, several embodiments of the invention are shown by way of illustration and not limitation, in which:

Fig. 1 is a schematic diagram of the structure of a pathological image processing model according to one embodiment of the present invention.

Detailed Description

The problem addressed by the invention is how to extract structured cell information from histopathology images with a graph convolutional neural network, so as to assist a convolutional neural network in gene-marker prediction.

According to one or more embodiments, a pathological image processing method comprises two stages: the first stage extracts structured graph information from the pathological image, and the second stage provides a graph-and-image fusion model (i.e., a fusion model of graph convolution and convolutional neural networks).

In the first stage, a cell graph is constructed from the pathological image: a pre-trained nucleus segmentation network (such as UNet or TransUNet) segments nucleus regions from the histopathology image, and each segmented nucleus region becomes a node of the graph, denoted v_j, where the subscript j indexes the nucleus.

Next, the weights of edges between the graph nodes are computed from the distances between the centroids of the nucleus regions. The weight w_ij of the edge between graph nodes v_j and v_i is given by the following formula:

Figure BDA0003792742570000021

Here d(v_i, v_j) is the distance between the centroids of nuclei v_j and v_i, and d_c is a critical distance that bounds the maximum range over which cells interact. It is a hyperparameter determined by the magnification of the pathological image. Distances are measured in pixels and may need to be converted according to the image magnification. For example, if the actual physical distance is 50 microns, it is converted according to the magnification; at 20x magnification (0.5 micron/pixel), d_c takes a value of 40 pixels.
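As a minimal sketch of this unit conversion, assuming a simple linear micron-per-pixel scale set by the magnification (the function name and the rule itself are illustrative assumptions, not taken from the filing):

```python
def microns_to_pixels(distance_um: float, um_per_pixel: float) -> float:
    """Convert a physical distance in microns to a pixel distance,
    assuming a linear micron-per-pixel scale for the given magnification.
    (Illustrative helper; not part of the filing.)"""
    return distance_um / um_per_pixel

# Example: at a resolution of 0.5 micron/pixel, 50 microns is 100 pixels.
threshold_px = microns_to_pixels(50.0, 0.5)
```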

When the distance between nucleus centroids is less than d_c, the weight is

Figure BDA0003792742570000022

otherwise the weight is 0, i.e., there is no edge between the two nodes. The node feature of each graph node v_j is determined by predefined radiomics features of its nucleus region, which can be adjusted to the needs of the downstream task. At this stage, the constructed graph characterizes the morphological geometry of the cells in the pathological image and of the relations between them, and serves as the input to the graph convolutional neural network in the next stage.
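The first stage can be sketched as follows. The filing's closed-form weight expression is reproduced only as an image, so the linearly decaying weight 1 - d/d_c below is an assumption for illustration; only the centroid distance and the thresholding at d_c are taken from the text:

```python
import math

def build_cell_graph(centroids, d_c):
    """Build a weighted cell graph from nucleus centroids.

    centroids: list of (x, y) pixel coordinates of segmented nucleus regions.
    d_c: critical distance in pixels (hyperparameter tied to magnification).

    Returns a dict mapping node-index pairs (i, j) to edge weights.
    The weight function (linear decay, 1 - d/d_c) is an assumed stand-in
    for the filing's formula (1), which is given only as an image.
    """
    edges = {}
    n = len(centroids)
    for i in range(n):
        for j in range(i + 1, n):
            xi, yi = centroids[i]
            xj, yj = centroids[j]
            d = math.hypot(xi - xj, yi - yj)  # 2-D Euclidean centroid distance
            if d < d_c:
                edges[(i, j)] = 1.0 - d / d_c  # assumed decaying weight
            # d >= d_c: weight 0, i.e. no edge between the two nodes
    return edges
```

In a full pipeline, each node would additionally carry a radiomics feature vector computed over its nucleus region; that step is omitted here.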

The second stage is the fusion model of the graph convolutional neural network (GNN) and the convolutional neural network (CNN). First, a GNN model (of unrestricted architecture, such as but not limited to GCN or GIN) is trained to extract, from the cell graph built in the first stage, a low-dimensional feature H_G that characterizes the geometric structure of the pathological image; the subscript G indicates that this feature captures geometric information. That is, the GNN takes the cell graph as input and, after graph convolution operations, outputs a one-dimensional feature H_G. In parallel, a CNN model (of unrestricted architecture, such as but not limited to DenseNet or ResNet) is trained to extract an image-level low-dimensional feature H_I from the pathological image; the subscript I indicates that this feature captures image-level information. That is, the CNN takes the pathological image as input and, after convolution operations, outputs a one-dimensional feature H_I.

Because H_G and H_I are extracted, by the GNN and the CNN respectively, from the cell graph derived from the pathological image and from the pathological image itself, they characterize different and complementary aspects. The two low-dimensional features are therefore first concatenated, and the concatenated feature is then further fused by a fusion layer and used for the downstream gene expression prediction task. The fusion layer supports two strategies: a learnable MLP layer or a learnable Transformer layer. The fused feature obtained from H_G and H_I is denoted H_O, where the subscript O marks it as the fused feature. The overall model structure is shown in Fig. 1.

MLP-based fusion layer model: the MLP fusion strategy iterates multiple MLP units (MLPBlock) to obtain the fused feature H_O:

H_O = MLPBlock(...(MLPBlock(H_C))) (2)

where H_C is the concatenation of H_G and H_I, i.e., H_C = concat[H_I, H_G], and concat is the concatenation operation.

MLPBlock is defined as

MLPBlock(H) = Dropout(ReLU(Linear(H))) (3)

where Linear is a learnable linear layer, ReLU is the ReLU activation function, and Dropout is a Dropout regularization layer.
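A minimal pure-Python sketch of formulas (2) and (3); the helper names are mine, a real implementation would use a deep learning framework, and dropout is the identity outside training:

```python
import random

def linear(h, W, b):
    # y = W h + b, with W given as a list of rows
    return [sum(wi * hi for wi, hi in zip(row, h)) + bi
            for row, bi in zip(W, b)]

def relu(h):
    return [max(0.0, x) for x in h]

def dropout(h, p=0.5, training=False):
    # At inference dropout is the identity; during training it zeroes
    # each unit with probability p and rescales the survivors.
    if not training:
        return list(h)
    return [0.0 if random.random() < p else x / (1.0 - p) for x in h]

def mlp_block(h, W, b, training=False):
    # MLPBlock(H) = Dropout(ReLU(Linear(H)))  -- formula (3)
    return dropout(relu(linear(h, W, b)), training=training)

def fuse(h_g, h_i, blocks, training=False):
    # H_C = concat[H_I, H_G]; H_O = MLPBlock(...(MLPBlock(H_C)))  -- formula (2)
    h = list(h_i) + list(h_g)
    for W, b in blocks:
        h = mlp_block(h, W, b, training=training)
    return h
```

For example, with a single block whose weight matrix simply selects the first two coordinates of H_C, `fuse([1.0, 2.0], [3.0, 4.0], ...)` returns those coordinates after the ReLU.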

Transformer-based fusion layer model: the Transformer-based fusion strategy is expressed as

H_O = Pooling(TransBlock(...(TransBlock(H_C)))) (4)

where H_C is the concatenation of H_G and H_I, i.e., H_C = concat[H_I, H_G], and concat is the concatenation operation.

Pooling is a pooling layer, and TransBlock, the basic unit of the Transformer, is expressed as

TransBlock(H) = ResidualPreNorm(MLPBlock, ResidualPreNorm(MHSA, H)) (5)

where MHSA is a multi-headed self-attention layer and ResidualPreNorm is a residual connection with the LayerNorm applied first (pre-norm). MLPBlock is defined as above.
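A pure-Python sketch of the pre-norm composition in formula (5). For brevity, MHSA is replaced by single-head dot-product attention with identity Q/K/V projections, and all learnable parameters are omitted; these are simplifying assumptions, not the filing's exact layers:

```python
import math

def layer_norm(x, eps=1e-5):
    # Normalize one token vector to zero mean and unit variance.
    m = sum(x) / len(x)
    var = sum((xi - m) ** 2 for xi in x) / len(x)
    s = math.sqrt(var + eps)
    return [(xi - m) / s for xi in x]

def softmax(xs):
    mx = max(xs)
    es = [math.exp(x - mx) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def self_attention(tokens):
    # Single-head scaled dot-product self-attention with identity
    # projections -- a simplified stand-in for the MHSA layer.
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                          for k in tokens])
        out.append([sum(a * v[j] for a, v in zip(scores, tokens))
                    for j in range(d)])
    return out

def residual_pre_norm(f, tokens):
    # ResidualPreNorm(f, H) = H + f(LayerNorm(H)), applied token-wise
    normed = [layer_norm(t) for t in tokens]
    fx = f(normed)
    return [[h + g for h, g in zip(t, ft)] for t, ft in zip(tokens, fx)]

def trans_block(tokens, mlp_block):
    # TransBlock(H) = ResidualPreNorm(MLPBlock, ResidualPreNorm(MHSA, H))
    h = residual_pre_norm(self_attention, tokens)
    return residual_pre_norm(lambda ts: [mlp_block(t) for t in ts], h)
```

Here `mlp_block` maps one token vector to another (e.g., a parameter-free ReLU for demonstration); the block preserves the token count and dimension, so TransBlocks can be stacked before pooling as in formula (4).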

Experimental comparison results of the invention.

The experiments use subsets of the public TCGA pathological image dataset to build tasks for predicting microsatellite instability (negative vs. positive) from colorectal cancer pathological images and from gastric cancer pathological images, together with a privately collected gastric cancer dataset for predicting PD-L1 expression. The classification accuracy of the invention improves AUC by more than 6.0% over models using only convolutional neural networks (including ResNet, DenseNet and MobileNetV3).

In the several embodiments provided in this application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a division by logical function, and other divisions are possible in actual implementations. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or of other forms.

If an integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and such modifications or replacements shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A pathological image processing method, comprising the steps of:
constructing a cell graph from a received pathological image, the cell graph containing cell structure information;
inputting the cell graph into a trained pathological image processing model to obtain a fused feature H_O used to predict the expression of gene markers.

2. The pathological image processing method according to claim 1, wherein constructing the cell graph comprises:
segmenting nucleus regions from the pathological image with a pre-trained nucleus segmentation model, each segmented nucleus region forming a node of the cell graph, denoted v_j, where the subscript j indexes the nuclei of one pathological image;
computing the weights of edges between graph nodes from the distances between the centroids of the nucleus regions, the weight w_ij of the edge between any two nodes v_j and v_i being given by formula (1):

Figure FDA0003792742560000011

where d(v_i, v_j) is the 2-dimensional Euclidean distance between the centroids of nuclei v_j and v_i, and d_c is a critical distance bounding the maximum range over which cells interact, determined by the specific dataset and the magnification of the pathological image;
when the distance between nucleus centroids is less than d_c, the weight w_ij is

Figure FDA0003792742560000012

otherwise the weight w_ij is 0, i.e., there is no edge between the two nodes.

3. The pathological image processing method according to claim 1, wherein the pathological image processing model combines a graph convolutional neural network (GNN) model and a convolutional neural network (CNN) model,
the GNN model extracts a low-dimensional feature H_G from the cell graph,
the CNN model extracts an image-level low-dimensional feature H_I from the pathological image, and
the features H_G and H_I are concatenated and passed through a fusion layer to obtain the fused feature H_O.

4. The pathological image processing method according to claim 3, wherein the fusion layer is based on an MLP fusion strategy that iterates multiple MLP units (MLPBlock) to obtain the fused feature H_O:

H_O = MLPBlock(...(MLPBlock(H_C))) (2)

where H_C is the concatenation of H_G and H_I, i.e., H_C = concat[H_I, H_G], and concat is the concatenation operation; MLPBlock is defined as

MLPBlock(H) = Dropout(ReLU(Linear(H))) (3)

where Linear is a learnable linear layer, ReLU is the ReLU activation function, and Dropout is a Dropout regularization layer.

5. The pathological image processing method according to claim 3, wherein the fusion layer is based on a Transformer fusion strategy expressed as

H_O = Pooling(TransBlock(...(TransBlock(H_C)))) (4)

where H_C is the concatenation of H_G and H_I, i.e., H_C = concat[H_I, H_G], concat is the concatenation operation, Pooling is a pooling layer, and TransBlock, the basic unit of the Transformer, is expressed as

TransBlock(H) = ResidualPreNorm(MLPBlock, ResidualPreNorm(MHSA, H)) (5)

where MHSA is a multi-headed self-attention layer and ResidualPreNorm is a residual connection with the LayerNorm applied first, i.e., the LayerNorm layer is placed before.

6. A pathological image processing model, wherein the model combines a graph convolutional neural network (GNN) model and a convolutional neural network (CNN) model,
the GNN model extracts a low-dimensional feature H_G from the cell graph of a pathological image,
the CNN model extracts an image-level low-dimensional feature H_I from the pathological image, and
the features H_G and H_I are concatenated and passed through a fusion layer to obtain the fused feature H_O.

7. A pathological image processing device, wherein the device comprises a memory; and
a processor coupled to the memory, the processor being configured to execute instructions stored in the memory, the processor performing the following operations:
constructing a cell graph from a received pathological image, the cell graph containing cell structure information;
inputting the cell graph into a trained pathological image processing model to obtain a fused feature H_O used to predict the expression of gene markers.

8. A storage medium on which a computer program is stored, wherein, when the program is executed by a processor, the method according to any one of claims 1 to 5 is implemented.
CN202210960232.6A 2022-08-11 2022-08-11 Pathological image processing method, model and equipment Pending CN115294157A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210960232.6A CN115294157A (en) 2022-08-11 2022-08-11 Pathological image processing method, model and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210960232.6A CN115294157A (en) 2022-08-11 2022-08-11 Pathological image processing method, model and equipment

Publications (1)

Publication Number Publication Date
CN115294157A true CN115294157A (en) 2022-11-04

Family

ID=83829044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210960232.6A Pending CN115294157A (en) 2022-08-11 2022-08-11 Pathological image processing method, model and equipment

Country Status (1)

Country Link
CN (1) CN115294157A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012353A (en) * 2023-02-07 2023-04-25 中国科学院重庆绿色智能技术研究院 Digital pathological tissue image recognition method based on graph convolution neural network
CN116152574A (en) * 2023-04-17 2023-05-23 厦门大学 A Pathological Image Classification Method Based on Multi-stage Information Extraction and Memory
CN116205928A (en) * 2023-05-06 2023-06-02 南方医科大学珠江医院 Image segmentation processing method, device and equipment for laparoscopic surgery video and medium
CN118781138A (en) * 2024-09-10 2024-10-15 安徽中医药大学 A medical image segmentation method and system based on structural constraints
CN118781138B (en) * 2024-09-10 2025-01-28 安徽中医药大学 A medical image segmentation method and system based on structural constraints
CN119324071A (en) * 2024-12-12 2025-01-17 杭州师范大学 Pathological section curative effect prediction method based on graph convolution network


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination