
CN119324071A - Pathological section curative effect prediction method based on graph convolution network - Google Patents

Pathological section curative effect prediction method based on graph convolution network

Info

Publication number
CN119324071A
CN119324071A
Authority
CN
China
Prior art keywords
graph
pathological
cell
nodes
pathological section
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411828076.3A
Other languages
Chinese (zh)
Inventor
冯博孩
杨德富
郭冰
刘喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Normal University
Original Assignee
Hangzhou Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Normal University filed Critical Hangzhou Normal University
Priority to CN202411828076.3A priority Critical patent/CN119324071A/en
Publication of CN119324071A publication Critical patent/CN119324071A/en
Pending legal-status Critical Current


Landscapes

  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Abstract


The present invention discloses a method for predicting the efficacy of pathological sections based on a graph convolutional network, comprising the following steps: obtaining a pathological section image and preprocessing the pathological section; identifying and segmenting the cells in the pathological section image to generate a segmentation mask for each cell; classifying the segmentation mask of each cell to identify its type, and then extracting the features corresponding to each cell; defining the nodes and edges of a graph according to the cells and their features, then taking the graph data of the graph as the input of the graph convolutional network and outputting the efficacy result of the pathological section; the graph data include a node feature matrix and an adjacency matrix. Based on a cell-granularity graph convolutional network, the present invention models pathological efficacy; by constructing an adjacency matrix and performing graph convolution operations, it can effectively capture the spatial relationships and contextual information between pathological section images, thereby compensating for the deficiencies in the pathological feature fusion process.

Description

Pathological section curative effect prediction method based on graph convolution network
Technical Field
The invention belongs to the technical field of medical image processing, and relates to a pathological section curative effect prediction method based on a graph convolutional network.
Background
In the field of medical pathology, pathological feature fusion techniques have received widespread attention in recent years. With the rapid development of medical imaging and pathology, fusing pathological features has become an important approach for improving the accuracy of disease diagnosis and prediction. Precision medicine, represented by immunotherapy, has great potential in the field of cancer, but existing models show obvious deficiencies when analyzing the tumor immune microenvironment from pathological data to predict the efficacy of immunotherapy, which has become an important bottleneck limiting clinical application.
Traditionally, pathology studies have typically been performed by dividing tissue sections into small blocks (Patches) and then performing feature fusion using multiple instance learning (MIL). Although MIL has met with some success in pathology image analysis, it has significant limitations when fusing Patches, particularly with respect to the spatial relationships between them:
1. Ignoring spatial context information. In the MIL framework, each Patch is treated as an independent entity and the spatial relationships between Patches are ignored. This means that even though neighboring Patches may contain shared, biologically important information, MIL cannot exploit this spatial context to improve prediction accuracy. This defect directly leads to the following problems in pathology image analysis:
(1) Incomplete immune-microenvironment analysis. Key features of the tumor immune microenvironment, such as immune-cell infiltration patterns and the feature distribution of tumor boundary regions, generally depend on spatial information that crosses Patch boundaries for a comprehensive analysis. Existing methods cannot capture this spatial context, which limits the depth and accuracy of immune-microenvironment analysis.
(2) Poor robustness of the prediction results. The lack of supplementary spatial information makes the model unstable when processing highly heterogeneous pathology images; the prediction results are easily swayed by the features of a single Patch, reducing the reliability of immunotherapy efficacy prediction.
2. Fixed spatial-relationship assumptions. Although some MIL methods can indirectly take spatial information into account by designing specific pooling strategies, this typically requires fixed assumptions about the spatial relationships between Patches. Such assumptions may not apply to all types of pathology images, limiting the versatility and flexibility of the model and further imposing the following limitations:
(1) Limited versatility. Pathology images may exhibit different structural characteristics under different disease types, tissue sites, or staining conditions. A fixed spatial-relationship assumption cannot adapt to these changes, so the model shows significant performance degradation when applied across scenarios.
(2) Insufficient expression of spatial features. Fixed assumptions limit the ability of the model to capture complex spatial relationships, such as nonlinear interactions between tumor cells and immune cells. As a result, the model shows weak explanatory power when predicting complex biological phenomena such as tumor immune-escape mechanisms.
In summary, a new method for pathological image analysis is needed to make up for these defects.
Disclosure of Invention
The first object of the present invention is to provide a pathological section curative effect prediction method based on a graph convolutional network, which can effectively capture the spatial relationships and contextual information between Patches by constructing an adjacency matrix and performing graph convolution operations, thereby overcoming the deficiencies in the pathological feature fusion process.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
Step S1, acquiring pathological section images, and preprocessing the pathological sections;
Step S2, identifying and segmenting cells in the pathological section image by using a Mask R-CNN model, and generating a segmentation mask for each cell;
Step S3, classifying the segmentation mask of each cell by using a DenseNet121 model to identify the type of each cell, and at the same time extracting the corresponding features of each cell with the DenseNet121 model;
Step S4, defining each cell as a node of the graph and calculating the Euclidean distance between the features of any two nodes of the same type; a threshold is defined, and for any two nodes v_i and v_j, if the Euclidean distance between their features is smaller than the threshold, there is an edge between v_i and v_j, denoted (v_i, v_j); all nodes are traversed to construct a graph G = (V, E), where V represents the set of nodes and E represents the set of edges;
Step S5, obtaining the graph data of the graph G as the input of a graph convolutional network (GCN), and predicting and outputting the curative effect result of the pathological section through the GCN, wherein the graph data include a node feature matrix and an adjacency matrix.
The second object of the present invention is to provide a pathological section curative effect prediction system based on a graph convolutional network for implementing the above method, which comprises the following modules:
The image preprocessing module is used for acquiring pathological section images and preprocessing the pathological section images;
The cell identification and classification module is used for carrying out cell identification and classification on the preprocessed pathological section image to obtain the type and the characteristics of each cell;
The graph construction module is used for constructing a graph containing a node feature matrix and an adjacency matrix according to the type and features of each cell;
And the curative effect prediction module is used for modeling and training the graph data using the graph convolutional network and outputting a curative effect prediction result.
A third object of the present invention is to provide an electronic device comprising a processor and a memory storing machine executable instructions executable by the processor, the processor executing the machine executable instructions to implement the above method.
It is a fourth object of the present invention to provide a machine-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the above-described method.
Compared with the prior art, the invention has the beneficial effects that:
According to the invention, cells are used as the nodes of the graph, modeling is performed at cell granularity with a graph convolutional network (GCN), the spatial context among cells is fully considered, and deep-learning features of the cells can be effectively extracted. Cells are separated from the pathological image through instance segmentation, and through the edge and node relationships of the graph structure, the expression of cell features is enhanced, the understanding of the interrelations among cells is improved, accurate modeling of the pathological curative effect is finally achieved, cell types can be identified, and accurate prognosis prediction can be carried out. By combining the advantages of deep learning and graph convolutional networks, the method can better capture the complex spatial and type-related characteristics among cells in pathology images, and has higher accuracy and interpretability.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The invention is further described below with reference to examples and figures.
As shown in FIG. 1, the pathological section curative effect prediction method based on graph convolution network of the invention comprises the following steps:
Before the invention is applied, pathological sections (Whole Slide Images, WSI) are cut into small blocks (Patches), and stain normalization is then carried out using the Macenko method, so that color deviations caused by different scanning devices and staining techniques are eliminated and the consistency of image color is ensured.
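As a concrete illustration, the following is a minimal sketch of this preprocessing step, assuming the openslide-python library for slide reading; the Macenko normalization itself is left as a hypothetical macenko_normalize placeholder rather than a specific implementation, and the patch size and background threshold are assumptions.

```python
# Minimal preprocessing sketch: tile a WSI into patches and stain-normalize each patch.
import numpy as np
import openslide  # assumes openslide-python is installed

def macenko_normalize(patch: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for Macenko stain normalization."""
    return patch  # replace with a real stain-normalization routine

def tile_wsi(wsi_path: str, patch_size: int = 512):
    """Yield stain-normalized RGB patches cut from level 0 of a whole-slide image."""
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.level_dimensions[0]
    for y in range(0, height - patch_size + 1, patch_size):
        for x in range(0, width - patch_size + 1, patch_size):
            region = slide.read_region((x, y), 0, (patch_size, patch_size))
            patch = np.array(region.convert("RGB"))
            if patch.mean() > 230:   # skip mostly-background (near-white) patches
                continue
            yield (x, y), macenko_normalize(patch)
```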
(1) Cell recognition based on Mask R-CNN
The present invention selects an object detection and segmentation model based on Mask R-CNN (Region-based Convolutional Neural Network), which can simultaneously identify objects in an image and generate corresponding pixel-level segmentation masks. The specific workflow is as follows:
A) Model initialization
1. Backbone network selection: ResNet-101 is chosen as the feature extractor of Mask R-CNN to capture complex pathological features with its strong representation capability.
2. Parameter initialization: the present invention uses ImageNet pre-trained weights to initialize the backbone network of the model, which can speed up convergence and improve the initial performance of the model.
3. RPN (Region Proposal Network) configuration: Mask R-CNN contains an RPN module for generating candidate regions; the invention adjusts the anchor sizes and aspect ratios in the RPN to adapt to the specific cell morphology in pathological images.
B) Hyperparameter selection
During the training of Mask R-CNN, the following hyperparameters can be adjusted according to the actual situation (a configuration sketch follows this list):
1. Learning rate: the initial learning rate is set to 0.001, with a cosine-annealing learning-rate schedule to balance training speed and convergence.
2. Batch size: the initial batch size is set to 2 and is adjusted according to GPU memory capacity.
3. Optimizer: an adaptive optimizer is adopted, which is suitable for processing sparse data and has strong adaptability.
4. Positive-to-negative sample ratio: in the RPN stage, the invention sets the ratio of positive to negative samples to 1:3 to ensure more accurate candidate-region generation for cell regions.
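To make the configuration above concrete, here is a hedged sketch in PyTorch/torchvision of how such a Mask R-CNN could be assembled. The number of classes, the exact anchor values, the choice of Adam, and the scheduler period are illustrative assumptions, and the resnet_fpn_backbone / weights arguments may need adjustment depending on the torchvision version.

```python
# Sketch: Mask R-CNN with a ResNet-101 FPN backbone, cell-sized anchors,
# an adaptive optimizer, and a cosine-annealed learning rate.
import torch
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone
from torchvision.models.detection.rpn import AnchorGenerator

NUM_CLASSES = 2  # assumption: background + cell

# ResNet-101 backbone with FPN, initialized from ImageNet pre-trained weights
# (argument name depends on torchvision version).
backbone = resnet_fpn_backbone(backbone_name="resnet101", weights="IMAGENET1K_V1")

# Smaller anchors and near-isotropic aspect ratios for cell-sized objects;
# one tuple per FPN level, exact values are assumptions.
anchor_generator = AnchorGenerator(
    sizes=((8,), (16,), (32,), (64,), (128,)),
    aspect_ratios=((0.5, 1.0, 2.0),) * 5,
)

model = MaskRCNN(
    backbone,
    num_classes=NUM_CLASSES,
    rpn_anchor_generator=anchor_generator,
    rpn_positive_fraction=0.25,  # roughly a 1:3 positive-to-negative ratio
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
# Training iterates over batches of 2 images; the model returns classification,
# box-regression, and mask losses, which are summed with task weights.
```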
C) Model training
1. Multi-GPU training: to speed up model training, the present invention employs multi-GPU parallel computing. If the data volume is large, mixed-precision training can be considered to reduce memory usage.
2. Loss function: the loss function consists of a classification loss, a bounding-box regression loss, and a mask segmentation loss. These losses are balanced in a weighted manner, and the specific weights are adjusted experimentally.
3. Model training strategy: 5-fold cross-validation is used to ensure the robustness and generalization of the model. Meanwhile, early stopping is added during training to avoid overfitting (a training-loop sketch follows this list).
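The training strategy described above could be organized roughly as in the following sketch; build_model, train_one_epoch, and evaluate are hypothetical callables supplied by the caller, and the epoch count and patience are assumptions.

```python
# Sketch: 5-fold cross-validation with early stopping on the validation loss.
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(sample_ids, build_model, train_one_epoch, evaluate,
                   epochs=100, patience=10):
    """Run 5-fold CV; the three callables are user-supplied (hypothetical)."""
    kfold = KFold(n_splits=5, shuffle=True, random_state=42)
    fold_scores = []
    for train_idx, val_idx in kfold.split(sample_ids):
        model = build_model()
        best_val, stale = float("inf"), 0
        for _ in range(epochs):
            train_one_epoch(model, train_idx)
            val_loss = evaluate(model, val_idx)
            if val_loss < best_val - 1e-4:   # meaningful improvement
                best_val, stale = val_loss, 0
            else:
                stale += 1
                if stale >= patience:        # early stopping
                    break
        fold_scores.append(best_val)
    return float(np.mean(fold_scores))
```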
(2) DenseNet121 cell type recognition
In the DenseNet121 model used for cell-type identification, the performance and accuracy of the model are improved as follows:
a) Model initialization:
A DenseNet121 pre-trained on ImageNet is used as the base model to inherit the feature-extraction capabilities learned from large-scale natural-image datasets. This facilitates rapid convergence and optimization of the model on pathology image data.
In the weight-initialization stage, a transfer-learning strategy matched to the pathological data is adopted so that the model transitions smoothly to the learning process of the specific task, ensuring the stability and adaptability of the model's initial state.
B) Loss function and optimizer:
The invention uses a cross-entropy loss function for the multi-class cell-type task to measure the difference between the predicted result and the actual class label; combined with class weighting, the cross-entropy loss can also help address imbalanced data distributions.
The Adam optimizer is adopted, with a weight-decay mechanism introduced on top of Adam to further improve the generalization ability of the model; this also effectively suppresses oscillation during weight updates and thus improves training stability.
C) Learning rate adjustment strategy:
To adjust the learning rate dynamically, a cosine-annealing learning-rate strategy is used. The initial learning rate is set to 0.001 and gradually decreases during training to prevent the model from falling into local optima.
The learning rate varies with the training epoch according to the formula:
η_t = η_min + (1/2) · (η_max − η_min) · (1 + cos(T_cur / T_max · π))
where η_min is set to 0, η_max is set to 0.001, T_cur is the current number of training epochs, and T_max is the total number of training epochs. This strategy adjusts the learning rate smoothly and improves the fine-tuning precision of the model in the final stage of training.
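A minimal sketch of this classifier setup, assuming PyTorch and torchvision: DenseNet121 with ImageNet weights, a replaced classification head, cross-entropy loss, Adam with weight decay (realized here as AdamW), and cosine annealing from 0.001 down to 0. The number of cell types, the weight-decay value, and the scheduler period are assumptions.

```python
# Sketch: DenseNet121 cell-type classifier with cross-entropy, AdamW, and cosine annealing.
import torch
import torch.nn as nn
from torchvision import models

NUM_CELL_TYPES = 5  # assumption: number of cell categories

model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, NUM_CELL_TYPES)

criterion = nn.CrossEntropyLoss()  # class weights can be passed to handle imbalance
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=100, eta_min=0.0  # eta_max = 0.001 comes from the initial lr
)
```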
(3) Cell type and cell characteristics
Through the preceding Mask R-CNN and DenseNet121 models, the present invention has identified the cell type of every cell in any Patch. Meanwhile, for each cell i identified by the Mask R-CNN model in the current Patch, the penultimate layer of the DenseNet121 model is used to extract its deep-learning feature, denoted here as f_i.
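A sketch of how the penultimate-layer feature f_i might be extracted, assuming the torchvision DenseNet121: the convolutional feature maps are globally average-pooled into the 1024-dimensional vector that normally feeds the classifier.

```python
# Sketch: extract a per-cell deep feature f_i from DenseNet121's penultimate layer.
import torch
import torch.nn.functional as F
from torchvision import models

densenet = models.densenet121(weights="IMAGENET1K_V1").eval()

@torch.no_grad()
def extract_cell_feature(cell_crop: torch.Tensor) -> torch.Tensor:
    """cell_crop: (1, 3, H, W) normalized image of a single masked cell."""
    fmap = densenet.features(cell_crop)           # (1, 1024, h, w) feature maps
    fmap = F.relu(fmap, inplace=True)
    pooled = F.adaptive_avg_pool2d(fmap, (1, 1))  # (1, 1024, 1, 1)
    return torch.flatten(pooled, 1)               # (1, 1024) feature vector f_i
```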
(4) GCN graph structure construction
In the definition of the graph of the present invention, each pathological cell is considered as a node in the graph. The connection relationship between nodes is represented by edges. Specifically, the present invention may formally define the construction process of the graph as follows:
Node & edge definition:
Let V = {v_1, v_2, …, v_N} denote the set of pathological-cell nodes, where N is the total number of pathological cells and V_k denotes the nodes of all cells in the k-th pathological Patch.
Edges between pathological cells: for any two cell nodes v_i and v_j of the same type, the invention calculates the Euclidean distance between their features and defines a threshold (a hyperparameter, typically greater than 0.6 or 0.7). For any v_i and v_j, if the Euclidean distance between their features is less than the threshold, there is an edge between them, denoted (v_i, v_j).
Graph representation:
Based on the definition of the nodes and edges, the invention can construct a graph G = (V, E), where V is the node set and E is the edge set, including all edges between pathological cells and between the imaging subregions and the pathological cells.
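The graph construction above can be sketched as follows; the use of NumPy, the brute-force pairwise loop, and the default threshold value are assumptions.

```python
# Sketch: build the node feature matrix X and binary adjacency matrix A,
# connecting same-type cells whose feature distance is below a threshold tau.
import numpy as np

def build_graph(features: np.ndarray, cell_types: np.ndarray, tau: float = 0.7):
    """features: (N, D) per-cell feature vectors; cell_types: (N,) type labels."""
    n = features.shape[0]
    adjacency = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(i + 1, n):
            if cell_types[i] != cell_types[j]:
                continue  # only cells of the same type are compared
            dist = np.linalg.norm(features[i] - features[j])
            if dist < tau:
                adjacency[i, j] = adjacency[j, i] = 1.0  # edge (v_i, v_j)
    return features.astype(np.float32), adjacency
```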
(5) Prediction of prognosis by GCN model
Preparing graph data:
According to the above graph-construction method, the graph data are prepared, including the node feature matrix X ∈ R^(N×D) (where N is the total number of nodes and D is the feature dimension) and the adjacency matrix A. The adjacency matrix A represents the connection relationships between nodes, i.e., if there is an edge between node v_i and node v_j, then A_ij = 1; otherwise A_ij = 0.
Model initialization:
parameters of the GCN model are initialized, including weight matrices for each graph convolution layer. These parameters may be initialized randomly or with pre-trained weights.
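A minimal GCN sketch written directly in PyTorch (no graph library assumed): two graph-convolution layers with symmetric normalization, a mean-pooling readout over nodes, and a classification head for the efficacy label. The layer widths and the readout choice are illustrative assumptions.

```python
# Sketch: a two-layer GCN for slide-level efficacy prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).clamp(min=1e-6).pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        return self.linear(a_norm @ x)

class EfficacyGCN(nn.Module):
    def __init__(self, in_dim=1024, hidden_dim=256, num_classes=2):
        super().__init__()
        self.gcn1 = GCNLayer(in_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, adj):
        h = F.relu(self.gcn1(x, adj))
        h = F.relu(self.gcn2(h, adj))
        graph_embedding = h.mean(dim=0)    # graph-level readout over all nodes
        return self.head(graph_embedding)  # efficacy logits for the section
```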
Loss calculation and optimization:
And calculating a loss function according to the output of the GCN model and the real label. The loss function may be a cross entropy loss for classification tasks or a mean square error loss for regression tasks.
Gradients of the loss function with respect to the model parameters are computed using the back-propagation algorithm, and the model parameters are updated using an optimizer (e.g., Adam) to minimize the loss.
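One optimization step could then look like the following sketch, reusing the EfficacyGCN class from the previous sketch; the tensor shapes, toy adjacency, and label are placeholders.

```python
# Sketch: one training step -- cross-entropy loss, backpropagation, Adam update.
import torch
import torch.nn.functional as F

model = EfficacyGCN()                       # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.randn(50, 1024)                   # node feature matrix (assumed shapes)
A = (torch.rand(50, 50) > 0.9).float()      # toy adjacency matrix
A = ((A + A.T) > 0).float()                 # symmetrize
label = torch.tensor(1)                     # ground-truth efficacy class

logits = model(X, A)                        # forward pass over the graph
loss = F.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
loss.backward()                             # gradients w.r.t. model parameters
optimizer.step()                            # Adam update to minimize the loss
optimizer.zero_grad()
```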
Learning rate:
Meanwhile, for better generalization, the invention carefully sets the learning rate and adopts a cosine-decay learning-rate algorithm. The learning rate is set as follows:
η_t = η_min + (1/2) · (η_max − η_min) · (1 + cos(t / T · π))
where η_min, η_max, and T respectively denote the minimum learning rate, the maximum learning rate, and the number of iteration cycles, and t is the current iteration.
The above embodiments illustrate the basic principles, principal features, and advantages of the present invention; they are illustrative only and do not limit its scope. Modifications made on the basis of the above embodiments fall within the scope of the present invention.

Claims (9)

1. The pathological section curative effect prediction method based on the graph convolution network is characterized by comprising the following steps of:
Step S1, acquiring pathological section images, and preprocessing the pathological sections;
Step S2, identifying and segmenting cells in the pathological section image, and generating a segmentation mask for each cell;
Step S3, classifying the segmentation mask of each cell, identifying the type of each cell, and extracting the corresponding features of each cell;
Step S4, defining each cell as a node of the graph, calculating the Euclidean distance between the features of any two nodes of the same type, judging whether an edge exists between the nodes according to the Euclidean distance, traversing all the nodes, and constructing a graph G = (V, E), where V represents the set of nodes and E represents the set of edges;
Step S5, obtaining the graph data of the graph G as the input of a graph convolutional network (GCN), and predicting and outputting the curative effect result of the pathological section through the GCN, wherein the graph data comprise a node feature matrix and an adjacency matrix.
2. The pathological section curative effect prediction method based on the graph convolutional network according to claim 1, wherein the preprocessing in step S1 comprises dividing the pathological section image into small blocks and then performing Macenko stain normalization on the small blocks.
3. The pathological section curative effect prediction method based on the graph convolutional network according to claim 1, wherein a Mask R-CNN model is adopted in step S2.
4. The pathological section curative effect prediction method based on the graph convolutional network according to claim 1, wherein a DenseNet121 model is adopted in step S3.
5. The pathological section curative effect prediction method based on the graph convolutional network according to claim 1, wherein in step S4:
Let V = {v_1, v_2, …, v_N} denote the set of pathological-cell nodes, where N is the total number of pathological cells and V_k denotes the nodes of all cells in the k-th pathological Patch;
for any two cell nodes v_i and v_j of the same type, the Euclidean distance between their features is calculated and a threshold is defined; for any v_i and v_j, if the Euclidean distance between their features is smaller than the threshold, there is an edge between them, denoted (v_i, v_j).
6. The method of claim 1, wherein the graph data in step S5 comprise a node feature matrix X ∈ R^(N×D) and an adjacency matrix A, wherein N is the total number of nodes and D is the feature dimension; the adjacency matrix A represents the connection relationships between the nodes, namely: if there is an edge between node v_i and node v_j, then A_ij = 1; otherwise A_ij = 0.
7. A pathological section curative effect prediction system based on a graph convolutional network, implementing the method of any one of claims 1-6, comprising the following modules:
The image preprocessing module is used for acquiring pathological section images and preprocessing the pathological section images;
The cell identification and classification module is used for carrying out cell identification and classification on the preprocessed pathological section image to obtain the type and the characteristics of each cell;
The graph construction module is used for constructing a graph containing a node feature matrix and an adjacency matrix according to the type and features of each cell;
And the curative effect prediction module is used for modeling and training the graph data using the graph convolutional network and outputting a curative effect prediction result.
8. An electronic device comprising a processor and a memory storing machine executable instructions executable by the processor to implement the method of any one of claims 1-6.
9. A machine-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the method of any one of claims 1-6.
CN202411828076.3A 2024-12-12 2024-12-12 Pathological section curative effect prediction method based on graph convolution network Pending CN119324071A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411828076.3A CN119324071A (en) 2024-12-12 2024-12-12 Pathological section curative effect prediction method based on graph convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411828076.3A CN119324071A (en) 2024-12-12 2024-12-12 Pathological section curative effect prediction method based on graph convolution network

Publications (1)

Publication Number Publication Date
CN119324071A true CN119324071A (en) 2025-01-17

Family

ID=94230563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411828076.3A Pending CN119324071A (en) 2024-12-12 2024-12-12 Pathological section curative effect prediction method based on graph convolution network

Country Status (1)

Country Link
CN (1) CN119324071A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462052A (en) * 2020-03-16 2020-07-28 清华大学 Medical image analysis method and system based on graph neural network
CN115294157A (en) * 2022-08-11 2022-11-04 上海交通大学 Pathological image processing method, model and equipment
US20230177682A1 (en) * 2020-05-06 2023-06-08 The Board Of Regents Of The University Of Texas System Systems and methods for characterizing a tumor microenvironment using pathological images
CN116884603A (en) * 2023-07-17 2023-10-13 上海大学 A method for predicting the efficacy of immunotherapy for non-small cell lung cancer
CN116912823A (en) * 2023-07-27 2023-10-20 贵州医科大学附属医院 A deep learning-based cell identification method and system for renal pathological slices
CN118262913A (en) * 2024-03-12 2024-06-28 中国科学院深圳先进技术研究院 Prognostic analysis method, system, device and medium based on multi-scale pathological images
CN118643371A (en) * 2024-06-17 2024-09-13 西南科技大学 A method for predicting cell subpopulation distribution in pancreatic cancer pathological images
CN119049548A (en) * 2024-07-25 2024-11-29 南京航空航天大学 Gene mutation prediction method combined with tumor microenvironment information



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination