CN115294157A - Pathological image processing method, model and equipment - Google Patents
Pathological image processing method, model and equipment
- Publication number
- CN115294157A (application CN202210960232.6A)
- Authority
- CN
- China
- Prior art keywords
- pathological image
- model
- image processing
- layer
- fusion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 230000001575 pathological effect Effects 0.000 title claims abstract description 49
- 238000003672 processing method Methods 0.000 title claims abstract description 10
- 230000004927 fusion Effects 0.000 claims abstract description 29
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 25
- 230000014509 gene expression Effects 0.000 claims abstract description 13
- 238000012545 processing Methods 0.000 claims abstract description 11
- 108090000623 proteins and genes Proteins 0.000 claims abstract description 8
- 210000004027 cell Anatomy 0.000 claims description 28
- 210000004940 nucleus Anatomy 0.000 claims description 10
- 238000000034 method Methods 0.000 claims description 6
- 238000011176 pooling Methods 0.000 claims description 6
- 230000005484 gravity Effects 0.000 claims description 5
- 230000006870 function Effects 0.000 claims description 4
- 210000003855 cell nucleus Anatomy 0.000 claims description 3
- 230000004913 activation Effects 0.000 claims description 2
- 230000007246 mechanism Effects 0.000 claims description 2
- 230000011218 segmentation Effects 0.000 claims description 2
- 238000004364 calculation method Methods 0.000 claims 1
- 238000004590 computer program Methods 0.000 claims 1
- 238000013528 artificial neural network Methods 0.000 abstract description 3
- 208000032818 Microsatellite Instability Diseases 0.000 description 6
- 208000005718 Stomach Neoplasms Diseases 0.000 description 4
- 206010017758 gastric cancer Diseases 0.000 description 4
- 201000011549 stomach cancer Diseases 0.000 description 4
- 206010009944 Colon cancer Diseases 0.000 description 3
- 208000001333 Colorectal Neoplasms Diseases 0.000 description 3
- 230000008878 coupling Effects 0.000 description 3
- 238000010168 coupling process Methods 0.000 description 3
- 238000005859 coupling reaction Methods 0.000 description 3
- 239000003550 marker Substances 0.000 description 3
- 230000007170 pathology Effects 0.000 description 3
- 238000004891 communication Methods 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 101001117317 Homo sapiens Programmed cell death 1 ligand 1 Proteins 0.000 description 1
- 102100024216 Programmed cell death 1 ligand 1 Human genes 0.000 description 1
- 230000002776 aggregation Effects 0.000 description 1
- 238000004220 aggregation Methods 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 230000002068 genetic effect Effects 0.000 description 1
- 239000003446 ligand Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000000877 morphologic effect Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30028—Colon; Small intestine
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30092—Stomach; Gastric
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The invention belongs to the fields of biology and pathology, and in particular relates to a pathological image processing method, model, and device.
Background Art
In biology and pathology, structural information in cell morphology, such as how cells aggregate and the distances between cells, reflects the expression of gene markers (for example, microsatellite instability in colorectal cancer). However, little has been said about how this property can be exploited for gene-marker prediction.
Summary of the Invention
In one embodiment of the present invention, a pathological image processing method constructs a cell graph from a received pathological image, the cell graph containing cell structure information.
The cell graph is input into a trained pathological image processing model to obtain a fused feature H_O, which is used to predict the expression of gene markers.
This embodiment explicitly extracts the cell structure information of the pathological image in the form of a graph and uses a graph convolutional neural network to extract geometric features from the graph. These features are fused with the features that a convolutional neural network extracts from the pathological image and are used to predict the expression of gene markers.
The invention proposes a fusion model of graph convolution and convolutional neural networks for gene-marker prediction on pathological images. On three pathological datasets (predicting microsatellite instability (MSI) from colorectal cancer pathological images, predicting MSI in gastric cancer, and predicting programmed cell death-ligand 1 (PD-L1) expression in gastric cancer), the model achieves significant experimental gains: its classification accuracy clearly exceeds that of pathological image processing models that use only a convolutional neural network.
Brief Description of the Drawings
The above and other objects, features, and advantages of exemplary embodiments of the present invention will become readily understood by reading the following detailed description with reference to the accompanying drawings. In the drawings, several embodiments of the invention are shown by way of illustration and not limitation, in which:
Fig. 1 is a schematic diagram of the structure of a pathological image processing model according to one embodiment of the present invention.
Detailed Description
The problem the invention solves is how to use a graph convolutional neural network to extract structured cell information from histopathology images, so as to assist a convolutional neural network in gene-marker prediction.
According to one or more embodiments, a pathological image processing method comprises two stages: the first stage extracts structured graph information from the pathological image, and the second stage provides a graph-image fusion model (that is, a fusion model of graph convolution and convolutional neural networks).
In the first stage, a cell graph is constructed from the pathological image: a pre-trained nucleus segmentation network (for example, UNet or TransUNet) segments the nucleus regions out of the histopathology image, and each segmented nucleus region forms one node of the graph, denoted v_j, where the subscript j indexes the nucleus.
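As a minimal sketch of this node-construction step (assuming the nuclei have already been segmented into binary masks; the segmentation network itself is not reproduced here), each nucleus region can be reduced to a graph node via its centroid:

```python
def centroid(mask):
    """Centroid (row, col) of a binary nucleus mask given as a list of 0/1 rows."""
    count = rsum = csum = 0
    for r, row in enumerate(mask):
        for c, v in enumerate(row):
            if v:
                count += 1
                rsum += r
                csum += c
    if count == 0:
        raise ValueError("empty mask")
    return (rsum / count, csum / count)

# Each segmented nucleus region becomes one graph node v_j, represented
# here simply by its centroid (the radiomics node features are omitted).
nucleus_masks = [
    [[0, 1, 1],
     [0, 1, 1],
     [0, 0, 0]],  # nucleus v_0
    [[0, 0, 0],
     [0, 0, 0],
     [1, 1, 0]],  # nucleus v_1
]
nodes = [centroid(m) for m in nucleus_masks]
print(nodes)
```

In practice the masks would come from the segmentation network's output and the node features from a radiomics library, both of which are outside the scope of this sketch.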
Next, the weights of the edges between graph nodes are computed from the distances between the centroids of the nucleus regions. The weight w_ij of the edge between graph nodes v_i and v_j is computed as follows:
where d(v_i, v_j) denotes the distance between the centroids of nuclei v_i and v_j, and d_c denotes the critical distance, a hyperparameter that determines the maximum range over which one cell can act on another and is set according to the magnification of the pathological image. Distances here are measured in pixels and may need to be converted according to the image magnification; for example, when the actual physical distance is 50 micrometers, it is converted for the magnification in use, and at 20x magnification (0.5 micrometers/pixel) d_c is set to 40 pixels.
When the distance between cell centroids is less than d_c, the weight takes the value given by the formula above; otherwise the weight is 0, that is, there is no edge linking the two nodes. The node feature of each graph node v_j is determined by predefined radiomics features of the corresponding nucleus region, which can be adjusted to the needs of the downstream task. At this stage, the constructed graph captures the morphological geometry of the cells and their mutual arrangement in the pathological image and serves as the input to the graph convolutional neural network in the next stage.
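The edge-weight expression itself is not reproduced in this text (it appeared only as an image in the original document), so the sketch below assumes a simple inverse-distance weight 1 - d/d_c purely for illustration; only the thresholding behavior (weight 0 beyond d_c) is taken from the description:

```python
import math

def edge_weight(ci, cj, d_c):
    """Edge weight between two nuclei given their centroids.

    NOTE: the exact weight formula is omitted in the source text, so
    1 - d/d_c is an illustrative assumption; the zero weight beyond
    the critical distance d_c follows the description.
    """
    d = math.dist(ci, cj)
    return 1.0 - d / d_c if d < d_c else 0.0

d_c = 40  # pixels, e.g. at 20x magnification (0.5 micrometers/pixel)
centroids = [(0.0, 0.0), (0.0, 30.0), (0.0, 100.0)]
weights = {
    (i, j): edge_weight(centroids[i], centroids[j], d_c)
    for i in range(len(centroids))
    for j in range(i + 1, len(centroids))
}
print(weights)
```

Only the pair of centroids within 40 pixels of each other receives a nonzero weight; the other pairs end up with no edge.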
The second stage is a fusion model of a graph neural network (GNN) and a convolutional neural network (CNN). First, a GNN model (the architecture is not restricted; examples include, but are not limited to, GCN and GIN) is trained to extract from the cell graph built in the first stage a low-dimensional feature H_G that captures the geometric structure of the pathological image; the subscript G indicates that the feature describes geometric information. That is, the GNN model takes the cell graph as input and, after graph convolution operations, produces a one-dimensional feature output H_G. At the same time, a CNN model (the architecture is not restricted; examples include, but are not limited to, DenseNet and ResNet) is trained to extract an image-level low-dimensional feature H_I from the pathological image; the subscript I indicates that the feature describes image-level information. That is, the CNN model takes the pathological image as input and, after convolution operations, produces a one-dimensional feature output H_I.
Because H_G and H_I are extracted, by the GNN and the CNN respectively, from the cell graph derived from the pathological image and from the pathological image itself, they capture different and complementary characteristics. The two low-dimensional features are therefore first concatenated, and the concatenated feature is then fused further through a fusion layer for the downstream gene-expression prediction task. The fusion layer supports two strategies: a learnable MLP layer or a learnable Transformer layer. The fused feature obtained from H_G and H_I is denoted H_O, where the subscript O indicates that it is a fused feature. The overall model structure is shown in Fig. 1.
MLP-based fusion layer: the MLP fusion strategy iterates several MLP units (MLPBlock) to obtain the expression of the fused feature H_O:
H_O = MLPBlock(...(MLPBlock(H_C)))  (2)
where H_C is the concatenation of H_G and H_I, i.e., H_C = concat[H_I, H_G], and concat denotes the concatenation operation.
MLPBlock is expressed as follows:
MLPBlock(H) = Dropout(ReLU(Linear(H)))  (3)
where Linear is a learnable linear layer, ReLU is the ReLU activation function, and Dropout is a Dropout regularization layer.
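A minimal pure-Python sketch of equations (2) and (3), with tiny fixed weight matrices standing in for the learnable Linear layers and with Dropout omitted (it acts as the identity at inference time):

```python
def linear(h, W, b):
    """y = W h + b with plain-Python lists (stand-in for a learnable Linear layer)."""
    return [sum(w * x for w, x in zip(row, h)) + bi for row, bi in zip(W, b)]

def mlp_block(h, W, b):
    # MLPBlock(H) = Dropout(ReLU(Linear(H))); Dropout is the identity at inference
    return [max(0.0, x) for x in linear(h, W, b)]

H_I = [1.0, 2.0]    # image-level feature from the CNN
H_G = [0.5, -1.0]   # geometric feature from the GNN
H_C = H_I + H_G     # H_C = concat[H_I, H_G]

# Equation (2) with two stacked blocks: H_O = MLPBlock(MLPBlock(H_C)).
# The weight matrices below are arbitrary toy values, not learned parameters.
W1, b1 = [[0.5, 0.0, 1.0, 0.0], [0.0, 0.5, 0.0, 1.0]], [0.0, 0.0]
W2, b2 = [[1.0, -1.0], [1.0, 1.0]], [0.1, 0.1]
H_O = mlp_block(mlp_block(H_C, W1, b1), W2, b2)
print(H_O)
```

In a real model the weights would be trained end to end together with the GNN and CNN feature extractors.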
Transformer-based fusion layer: the Transformer-based fusion strategy is expressed as follows:
H_O = Pooling(TransBlock(...(TransBlock(H_C))))  (4)
where H_C is the concatenation of H_G and H_I, i.e., H_C = concat[H_I, H_G], and concat denotes the concatenation operation.
Pooling is a pooling layer, and TransBlock is the basic Transformer unit, expressed as follows:
TransBlock(H) = ResidualPreNorm(MLPBlock, ResidualPreNorm(MHSA, H))  (5)
where MHSA is a multi-headed self-attention layer and ResidualPreNorm applies a residual connection around a sublayer preceded by a LayerNorm layer. MLPBlock is defined as above.
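Purely to illustrate the composition in equations (4) and (5), the sketch below replaces the learnable pieces with fixed stand-ins: the Q/K/V projections of MHSA are identity maps, the inner MLPBlock is a bare ReLU, and Pooling is mean pooling over the token sequence; none of these specific choices are prescribed by the source.

```python
import math

def layer_norm(h, eps=1e-5):
    m = sum(h) / len(h)
    var = sum((x - m) ** 2 for x in h) / len(h)
    return [(x - m) / math.sqrt(var + eps) for x in h]

def softmax(xs):
    mx = max(xs)
    es = [math.exp(x - mx) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(tokens):
    """Single-head self-attention with identity Q/K/V projections (toy MHSA)."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = softmax([sum(a * b for a, b in zip(q, k)) / math.sqrt(d)
                          for k in tokens])
        out.append([sum(s * v[i] for s, v in zip(scores, tokens)) for i in range(d)])
    return out

def relu_tokens(tokens):
    # stand-in for the learnable MLPBlock: element-wise ReLU only
    return [[max(0.0, x) for x in t] for t in tokens]

def residual_pre_norm(sublayer, tokens):
    # ResidualPreNorm(f, H) = H + f(LayerNorm(H)), applied token-wise
    normed = [layer_norm(t) for t in tokens]
    return [[a + b for a, b in zip(t, o)] for t, o in zip(tokens, sublayer(normed))]

def trans_block(tokens):
    # Equation (5): TransBlock(H) = ResidualPreNorm(MLPBlock, ResidualPreNorm(MHSA, H))
    return residual_pre_norm(relu_tokens, residual_pre_norm(self_attention, tokens))

def mean_pool(tokens):
    d = len(tokens[0])
    return [sum(t[i] for t in tokens) / len(tokens) for i in range(d)]

# Equation (4), treating H_I and H_G as a two-token sequence H_C
H_I, H_G = [1.0, 2.0, 3.0], [0.5, -1.0, 0.0]
H_O = mean_pool(trans_block([H_I, H_G]))
print(H_O)
```

The pre-norm ordering (normalize, apply the sublayer, then add the residual) is what distinguishes ResidualPreNorm from the post-norm arrangement of the original Transformer.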
Experimental comparison results of the present invention:
The experiments use subsets of the public TCGA pathological image dataset to build two tasks, predicting microsatellite instability (negative vs. positive) from colorectal cancer pathological images and from gastric cancer pathological images, together with a privately collected gastric cancer dataset for predicting PD-L1 expression. The classification accuracy of the invention improves AUC by more than 6.0% over models that use only a convolutional neural network (including ResNet, DenseNet, and MobileNetV3).
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a division by logical function, and other divisions are possible in actual implementations. For example, multiple units or components may be combined or integrated into another system, or some features may be omitted or left unimplemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
If the integrated units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above describes only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and such modifications or replacements shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210960232.6A CN115294157A (en) | 2022-08-11 | 2022-08-11 | Pathological image processing method, model and equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210960232.6A CN115294157A (en) | 2022-08-11 | 2022-08-11 | Pathological image processing method, model and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115294157A true CN115294157A (en) | 2022-11-04 |
Family
ID=83829044
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210960232.6A Pending CN115294157A (en) | 2022-08-11 | 2022-08-11 | Pathological image processing method, model and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115294157A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116012353A (en) * | 2023-02-07 | 2023-04-25 | 中国科学院重庆绿色智能技术研究院 | Digital pathological tissue image recognition method based on graph convolution neural network |
CN116152574A (en) * | 2023-04-17 | 2023-05-23 | 厦门大学 | A Pathological Image Classification Method Based on Multi-stage Information Extraction and Memory |
CN116205928A (en) * | 2023-05-06 | 2023-06-02 | 南方医科大学珠江医院 | Image segmentation processing method, device and equipment for laparoscopic surgery video and medium |
CN118781138A (en) * | 2024-09-10 | 2024-10-15 | 安徽中医药大学 | A medical image segmentation method and system based on structural constraints |
CN119324071A (en) * | 2024-12-12 | 2025-01-17 | 杭州师范大学 | Pathological section curative effect prediction method based on graph convolution network |
- 2022-08-11: CN application CN202210960232.6A filed; published as CN115294157A (status: Pending)
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116012353A (en) * | 2023-02-07 | 2023-04-25 | 中国科学院重庆绿色智能技术研究院 | Digital pathological tissue image recognition method based on graph convolution neural network |
CN116152574A (en) * | 2023-04-17 | 2023-05-23 | 厦门大学 | A Pathological Image Classification Method Based on Multi-stage Information Extraction and Memory |
CN116205928A (en) * | 2023-05-06 | 2023-06-02 | 南方医科大学珠江医院 | Image segmentation processing method, device and equipment for laparoscopic surgery video and medium |
CN118781138A (en) * | 2024-09-10 | 2024-10-15 | 安徽中医药大学 | A medical image segmentation method and system based on structural constraints |
CN118781138B (en) * | 2024-09-10 | 2025-01-28 | 安徽中医药大学 | A medical image segmentation method and system based on structural constraints |
CN119324071A (en) * | 2024-12-12 | 2025-01-17 | 杭州师范大学 | Pathological section curative effect prediction method based on graph convolution network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115294157A (en) | Pathological image processing method, model and equipment | |
CN110084216B (en) | Face recognition model training and face recognition method, system, device and medium | |
CN108132968B (en) | A Weakly Supervised Learning Approach for Associated Semantic Primitives in Web Text and Images | |
CN110378366B (en) | A Cross-Domain Image Classification Method Based on Coupling Knowledge Transfer | |
CN113705772A (en) | Model training method, device and equipment and readable storage medium | |
CN112241481A (en) | Cross-modal news event classification method and system based on graph neural network | |
CN108960184B (en) | A Pedestrian Re-identification Method Based on Heterogeneous Component Deep Neural Network | |
CN107301246A (en) | Chinese Text Categorization based on ultra-deep convolutional neural networks structural model | |
CN107330355B (en) | Deep pedestrian re-identification method based on positive sample balance constraint | |
CN110097060B (en) | Open set identification method for trunk image | |
CN104933428B (en) | A kind of face identification method and device based on tensor description | |
CN110111365B (en) | Training method and device based on deep learning and target tracking method and device | |
CN114693997B (en) | Image description generation method, device, equipment and medium based on transfer learning | |
CN108664986B (en) | Multi-task learning image classification method and system based on lp norm regularization | |
CN113553326A (en) | Spreadsheet data processing method, device, computer equipment and storage medium | |
CN106528705A (en) | Repeated record detection method and system based on RBF neural network | |
CN116109834A (en) | A Few-Shot Image Classification Method Based on Attention Fusion of Local Orthogonal Features | |
CN112633169B (en) | Pedestrian recognition algorithm based on improved LeNet-5 network | |
CN108154156A (en) | Image Ensemble classifier method and device based on neural topic model | |
CN115205309A (en) | A method and device for extraterrestrial image segmentation based on semi-supervised learning | |
CN115115016A (en) | Method and device for training neural network | |
CN117809293A (en) | Small sample image target counting method based on deep neural network | |
CN115952438B (en) | Social platform user attribute prediction method, system, mobile device and storage medium | |
CN108460406B (en) | Scene image attribute identification method based on minimum simplex fusion feature learning | |
CN114638823B (en) | Full-slice image classification method and device based on attention mechanism sequence model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||