CN115018768A - An automatic assessment system for stent neointimal coverage based on OCT platform - Google Patents
An automatic assessment system for stent neointimal coverage based on OCT platform
- Publication number
- CN115018768A (application number CN202210528451.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- module
- oct
- mask
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10101—Optical tomography; Optical coherence tomography [OCT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
Description
Technical Field
The present application belongs to the technical field of medical image processing, and in particular relates to an automatic assessment system for stent neointimal coverage based on an OCT platform.
Background Art
Stents in blood vessels generally include two types: embedded stents and non-embedded stents. In the prior art, the stent and the lumen in a collected optical coherence tomography (OCT) image are first identified; based on the identified stent, the distance between the stent and the lumen is measured; whether the identified stent is an embedded stent is determined by comparing this distance with a preset distance threshold; and the healing status of the embedded stent is then determined according to the judgment result, so that reasonable and effective medical orders can be formulated.
However, in the above method, if there is an error in the recognition result, the measured distance between the stent and the lumen will be inaccurate, which affects the judgment of whether the stent is embedded, reduces the recognition accuracy, and in turn affects the doctor's assessment of the healing status of the embedded stent.
Summary of the Invention
The present application provides an automatic assessment system for stent neointimal coverage based on an OCT platform, which can obtain the neointimal coverage between the lumen and the embedded stent by analyzing collected OCT images.
In a first aspect, the present application provides an automatic assessment system for stent neointimal coverage based on an OCT platform. The system includes an acquisition module, a processing module, and an evaluation module. The acquisition module is configured to acquire an OCT image of a target lumen in which an embedded stent is placed. The processing module is configured to input the OCT image into a preset image segmentation model for processing and to output a mask image corresponding to the OCT image, the mask image identifying the lumen and the embedded stent. The evaluation module is configured to evaluate the neointimal coverage between the lumen and the embedded stent according to the mask image.
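As an illustrative sketch of how the acquisition, processing, and evaluation modules could be chained, the following Python/PyTorch snippet is offered; the class name, the thresholding step, the channel ordering (lumen in channel 0, stent in channel 1), and the helper method names are assumptions for illustration, not taken from the application:

```python
import torch

class CoverageEvaluator:
    """Sketch of the acquisition / processing / evaluation pipeline (names assumed)."""
    def __init__(self, seg_model, device="cpu"):
        self.seg_model = seg_model.to(device).eval()  # preset image segmentation model
        self.device = device

    def acquire(self, oct_frame):
        # acquisition module: OCT frame of the target lumen as an HxW float array in [0, 1]
        return torch.as_tensor(oct_frame, dtype=torch.float32,
                               device=self.device)[None, None]  # -> 1x1xHxW

    @torch.no_grad()
    def process(self, image):
        # processing module: binary mask image identifying lumen (ch. 0) and stent (ch. 1)
        return (self.seg_model(image) > 0.5).float()

    def evaluate(self, mask):
        # evaluation module: ratio of stent-covered area to lumen area (formula (7))
        stent_area = mask[0, 1].sum()
        lumen_area = mask[0, 0].sum().clamp(min=1.0)
        return (stent_area / lumen_area).item()
```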
In a possible implementation of the first aspect, the preset segmentation model includes a first convolutional layer, a first downsampling module, a second downsampling module, a first upsampling module, a second upsampling module, a third upsampling module, and a first activation layer. The first convolutional layer, the first downsampling module, the second downsampling module, and the first upsampling module are connected in sequence. The feature map output by the first upsampling module and the feature map output by the first downsampling module are processed by a first concatenation operation and a second activation function and then input to the second upsampling module. The feature map obtained by feeding the output of the first downsampling module through the third upsampling module, the feature map output by the first convolutional layer, and the feature map processed by the second upsampling module are processed by a second concatenation operation and the first activation function to output the mask image.
In a possible implementation of the first aspect, the first downsampling module and the second downsampling module each include a downsampling layer and a second convolutional layer, and the first upsampling module, the second upsampling module, and the third upsampling module each include an upsampling layer and a third convolutional layer; the first, second, and third convolutional layers all use 'same'-mode convolution.
In a possible implementation of the first aspect, the training of the image segmentation model includes: building a network model that includes an initial image segmentation model and an SE-ResNet model;
and training the network model according to a preset loss function and a training set, so that the initial image segmentation model is trained into the trained image segmentation model, where the SE-ResNet model is used to extract predicted edge masks of the OCT image samples in the training set.
In a possible implementation of the first aspect, the training set includes OCT image samples, a mask image sample corresponding to each OCT image sample, and an edge mask sample corresponding to each OCT image sample. The loss function constrains the error between the predicted image corresponding to an OCT image sample and its mask image sample, and the error between the predicted edge mask corresponding to the OCT image sample and its edge mask sample, where the predicted image is the image obtained by the initial image segmentation model after processing the OCT image sample, and the predicted edge mask is the image obtained by the SE-ResNet model after processing the OCT image sample.
In a possible implementation of the first aspect, the SE-ResNet model includes a fourth convolutional layer, a squeeze layer, and an excitation layer connected in sequence,
where the output of the fourth convolutional layer and the output of the excitation layer are combined by a weighting operation, and the weighted output is added to the input of the fourth convolutional layer to output the edge.
In a possible implementation of the first aspect, the reduction (scaling) factor in the excitation layer is 16.
In a second aspect, the present application provides a method for detecting neointimal coverage, applied to the automatic assessment system for stent neointimal coverage based on an OCT platform described in the first aspect or any optional manner of the first aspect. The method includes:
acquiring an OCT image of a target lumen in which an embedded stent is placed;
inputting the OCT image into a preset image segmentation model for processing, and outputting a mask image corresponding to the OCT image, the mask image identifying the lumen and the embedded stent; and
evaluating the neointimal coverage between the lumen and the embedded stent according to the mask image.
In a possible implementation of the second aspect, the preset segmentation model includes a first convolutional layer, a first downsampling module, a second downsampling module, a first upsampling module, a second upsampling module, a third upsampling module, and a first activation layer. The first convolutional layer, the first downsampling module, the second downsampling module, and the first upsampling module are connected in sequence. The feature map output by the first upsampling module and the feature map output by the first downsampling module are processed by a first concatenation operation and a second activation function and then input to the second upsampling module. The feature map obtained by feeding the output of the first downsampling module through the third upsampling module, the feature map output by the first convolutional layer, and the feature map processed by the second upsampling module are processed by a second concatenation operation and the first activation function to output the mask image.
In a possible implementation of the second aspect, the first downsampling module and the second downsampling module each include a downsampling layer and a second convolutional layer, and the first upsampling module, the second upsampling module, and the third upsampling module each include an upsampling layer and a third convolutional layer; the first, second, and third convolutional layers all use 'same'-mode convolution.
In a possible implementation of the second aspect, the training of the image segmentation model includes: building a network model that includes an initial image segmentation model and an SE-ResNet model;
and training the network model according to a preset loss function and a training set, so that the initial image segmentation model is trained into the trained image segmentation model, where the SE-ResNet model is used to extract predicted edge masks of the OCT image samples in the training set.
In a possible implementation of the second aspect, the training set includes OCT image samples, a mask image sample corresponding to each OCT image sample, and an edge mask sample corresponding to each OCT image sample. The loss function constrains the error between the predicted image corresponding to an OCT image sample and its mask image sample, and the error between the predicted edge mask corresponding to the OCT image sample and its edge mask sample, where the predicted image is the image obtained by the initial image segmentation model after processing the OCT image sample, and the predicted edge mask is the image obtained by the SE-ResNet model after processing the OCT image sample.
In a possible implementation of the second aspect, the SE-ResNet model includes a fourth convolutional layer, a squeeze layer, and an excitation layer connected in sequence,
where the output of the fourth convolutional layer and the output of the excitation layer are combined by a weighting operation, and the weighted output is added to the input of the fourth convolutional layer to output the edge.
In a possible implementation of the second aspect, the reduction (scaling) factor in the excitation layer is 16.
In a third aspect, the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the method described in the second aspect or any optional manner of the second aspect.
In a fourth aspect, a computer-readable storage medium stores a computer program that, when executed by a processor, implements the method described in the second aspect or any optional manner of the second aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product that, when run on a terminal device, causes the terminal device to execute the method described in the second aspect or any optional manner of the second aspect.
Compared with the prior art, the embodiments of the present application have the following beneficial effects:
The automatic assessment system for stent neointimal coverage based on an OCT platform provided by the present application directly uses a preset image segmentation model to process the OCT image of the target lumen, obtains a mask image corresponding to the OCT image, and then evaluates the neointimal coverage between the lumen and the embedded stent according to the mask image. Directly identifying the embedded stent in the OCT image with the preset image segmentation model avoids judging whether a stent is embedded from separate recognition results of the stent and the lumen in the OCT image, which can yield inaccurate judgments and affect the doctor's assessment of the healing status of the embedded stent; it speeds up the detection of the degree of embedded-stent coverage and improves the accuracy of the judgment results.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic structural diagram of an image segmentation model provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of generating an image segmentation model provided by an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an SE-ResNet model provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of the results of processing the same image with different image segmentation models, provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of an edge-fitting image provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of an automatic assessment system for stent neointimal coverage based on an OCT platform, provided by an embodiment of the present application;
Fig. 7 is a schematic diagram of a terminal device provided by an embodiment of the present application.
Detailed Description
Aiming at the problem that whether a stent is embedded is currently judged from the recognition results of the stent in the OCT image, which leads to inaccurate judgments and affects the doctor's assessment of the healing status of the embedded stent, the present application provides an automatic assessment system for stent neointimal coverage based on an OCT platform. The system directly uses a preset image segmentation model to identify the embedded stent in the OCT image, which effectively avoids judging whether a stent is embedded from the recognition results of the stent and the lumen in the OCT image, thereby avoiding inaccurate judgments that affect the doctor's assessment of the healing status of the embedded stent, speeding up the detection of the degree of embedded-stent coverage, and improving the accuracy of the judgment results.
The technical solutions of the present application are described in detail below with specific embodiments. The following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a schematic structural diagram of an image segmentation model provided by an embodiment of the present application. The image segmentation model may be deployed in an optical coherence tomography (OCT) image acquisition device, or in another control device associated with the OCT image acquisition device. For example, the device on which the image segmentation model is deployed may be a mobile terminal such as a smartphone, a tablet computer, or a camera, or another device capable of performing image segmentation such as a desktop computer, a robot, or a server.
Referring to Fig. 1, the image segmentation model provided by the embodiment of the present application is a densely connected U-Net model, which is used to extract, from an OCT image, a mask image that fuses different feature information.
The densely connected U-Net model has a depth of three levels and specifically includes a first convolutional layer, a first downsampling module, a second downsampling module, a first upsampling module, a second activation function, a second upsampling module, a third upsampling module, and a first activation function. The first convolutional layer, the first downsampling module, the second downsampling module, and the first upsampling module are connected in sequence. The feature map output by the first upsampling module and the feature map output by the first downsampling module are processed by a first concatenation operation and the second activation function and then input to the second upsampling module. The feature map obtained by feeding the output of the first downsampling module through the third upsampling module, the feature map output by the first convolutional layer, and the feature map processed by the second upsampling module are processed by a second concatenation operation and the first activation function to output the first mask image.
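The wiring described above can be sketched in PyTorch as follows; the channel widths, the ReLU/sigmoid activation choices, and the final 1×1 output convolution are assumptions added to make the sketch runnable and are not specified by the application:

```python
import torch
import torch.nn as nn

class DenseUNet3(nn.Module):
    """Sketch of the 3-level densely connected U-Net (channel widths are assumed)."""
    def __init__(self, in_ch=1, c=16):
        super().__init__()
        conv = lambda ci, co: nn.Conv2d(ci, co, 3, padding="same")    # 'same'-mode convolution
        self.conv1 = conv(in_ch, c)                                   # first convolutional layer
        self.down1 = nn.Sequential(nn.MaxPool2d(2), conv(c, 2 * c))   # first downsampling module
        self.down2 = nn.Sequential(nn.MaxPool2d(2), conv(2 * c, 4 * c))
        up = lambda ci, co: nn.Sequential(nn.Upsample(scale_factor=2), conv(ci, co))
        self.up1 = up(4 * c, 2 * c)        # first upsampling module
        self.up2 = up(4 * c, c)            # second upsampling module (after first concatenation)
        self.up3 = up(2 * c, c)            # third upsampling module (skip path from down1)
        self.out = nn.Conv2d(3 * c, 1, 1)  # assumed 1x1 conv before the final activation

    def forward(self, x):
        f1 = self.conv1(x)
        d1 = self.down1(f1)
        d2 = self.down2(d1)
        u1 = self.up1(d2)
        # first concatenation followed by the second activation function
        u2 = self.up2(torch.relu(torch.cat([u1, d1], dim=1)))
        # second concatenation of the skip path, the first conv output, and up2, then first activation
        merged = torch.cat([self.up3(d1), f1, u2], dim=1)
        return torch.sigmoid(self.out(merged))  # mask image
```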
It should be understood that the first downsampling module and the second downsampling module each include a downsampling layer and a second convolutional layer, where the downsampling layer is implemented by a max-pooling layer, so the size of the feature map processed by the downsampling layer is halved. Correspondingly, the first upsampling module, the second upsampling module, and the third upsampling module each include an upsampling layer and a third convolutional layer.
In order to efficiently fuse feature information between different convolutional layers, the size of the output feature map should remain consistent with that of the input feature map. In the embodiment of the present application, the first convolutional layer, the second convolutional layer in the downsampling modules, and the third convolutional layer in the upsampling modules all use 'same'-mode convolution.
It should be noted that the image segmentation model provided by this application is general-purpose. It can be applied to the field of medical image segmentation to extract organs, tissues, embedded stents, and the like from medical images and to complete segmentation tasks for different organs. It can also be applied to other tasks that take image segmentation quality as the evaluation index.
It can be understood that, for different image segmentation tasks, the initial image segmentation model can be trained by designing corresponding training sets and loss functions, so as to obtain image segmentation models suitable for those tasks.
According to actual application requirements, the entity that trains the image segmentation model and the entity that uses it to perform image segmentation may be the same or different.
The training process and effect of the image segmentation model provided by the present application are described below by way of example, taking the task of segmenting the lumen and the embedded stent from OCT images.
Fig. 2 is a schematic flowchart of generating an image segmentation model provided by an embodiment of the present application. Referring to Fig. 2, a network model is first constructed, which includes an initial image segmentation model (i.e., an initial densely connected U-Net model) and an SE-ResNet model.
Fig. 3 is a schematic structural diagram of the SE-ResNet model provided by an embodiment of the present application. Referring to Fig. 3, the SE-ResNet model includes a fourth convolutional layer, a squeeze layer, and an excitation layer. The output of the fourth convolutional layer and the output of the excitation layer are combined by a weighting operation, and the weighted output is added to the input of the fourth convolutional layer to output the edge mask.
It should be understood that the input OCT image is processed by the fourth convolutional layer to obtain a feature map X, where X ∈ R^(C×W×H), C is the number of channels of the feature map, W is its width, and H is its height.
The feature map X is then input into the squeeze layer, producing an output Z given by formula (1) below. The squeeze layer uses a global average pooling operation, i.e., the global spatial information of the feature map X input to the squeeze layer is compressed to obtain the output Z.
Z_c = (1/(W×H)) Σ_i Σ_j X_c(i, j)  (1)
In the above formula (1), W is the width of the feature map, H is its height, and (i, j) indexes the pixels of the feature map.
The output Z of the squeeze step is input into the excitation layer (two fully connected layers with activation functions arranged alternately) to obtain the excitation value S, calculated as follows:
S = σ(W2 δ(W1 Z))  (2)
In the above formula (2), W1 Z and W2 δ(W1 Z) correspond to the two fully connected layers, σ is the sigmoid function, δ may be the ReLU (rectified linear unit) activation function, and W1 and W2 are the parameter matrices of the two fully connected layers.
It is worth noting that, for the lumen and embedded-stent segmentation task, the reduction ratio of the excitation layer provided in this application is 16; the reduction ratio of the excitation layer may be set according to the actual application, which is not limited in this application.
The excitation value S is applied by a weighting operation (scale operation) to the feature map obtained from the fourth convolutional layer, and the weighted feature map is added to the input OCT image to obtain the edge mask Xout, which is calculated as follows:
Xout = Fscale(X, S)  (3)
It should be noted that, in this SE-ResNet model, the squeeze layer compresses the features of the feature map fed into the global average pooling, converting the feature map of each channel into a single real number and outputting a feature vector whose dimensionality equals the number of feature channels of the input feature map. The excitation layer evaluates a weight for the feature map of each channel, obtaining a weight value corresponding to each channel. The weighting operation multiplies the weight value of each channel by the two-dimensional matrix of the corresponding channel in the feature map obtained from the fourth convolutional layer, yielding the weighted feature map.
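A minimal PyTorch sketch of the squeeze, excitation, scale, and residual-addition sequence of formulas (1) to (3) is given below; the channel count, the kernel size, and the assumption that the block input already has the same number of channels as the output of the fourth convolutional layer are illustrative choices, not taken from the application:

```python
import torch
import torch.nn as nn

class SEResBlock(nn.Module):
    """Squeeze-and-excitation residual block used to predict edge features (sketch)."""
    def __init__(self, ch=64, reduction=16):
        super().__init__()
        self.conv4 = nn.Conv2d(ch, ch, 3, padding="same")  # fourth convolutional layer
        self.fc1 = nn.Linear(ch, ch // reduction)           # W1 (channels -> bottleneck)
        self.fc2 = nn.Linear(ch // reduction, ch)           # W2 (bottleneck -> channels)

    def forward(self, x):
        feat = self.conv4(x)                                   # X, shape (N, C, H, W)
        z = feat.mean(dim=(2, 3))                              # formula (1): global average pooling
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(z))))  # formula (2): S = sigma(W2 delta(W1 Z))
        scaled = feat * s[:, :, None, None]                    # formula (3): channel-wise scale
        return scaled + x                                      # residual addition with the block input
```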
Next, for the lumen and embedded-stent segmentation task, a corresponding training set is collected. The training set includes OCT image samples, mask image samples corresponding to the OCT image samples, and edge mask samples corresponding to the OCT image samples. The OCT image samples may be obtained, for example, by directly using OCT images from an existing OCT image database or by acquiring OCT images from public websites.
In practical applications, the mask image sample corresponding to an OCT image sample is a manually annotated image sample, used for comparison with the mask image output by the image segmentation model.
In one possible implementation, after the OCT image samples are acquired, image preprocessing is first performed on them. Exemplarily, the image preprocessing methods may include grayscale conversion, normalization, histogram equalization, and/or gamma correction.
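A short sketch of these exemplary preprocessing steps using OpenCV and NumPy is given below; the gamma value and the zero-mean normalization variant are assumptions for illustration:

```python
import cv2
import numpy as np

def preprocess_oct(bgr_image, gamma=0.8):
    """Grayscale -> histogram equalization -> gamma correction -> normalization (sketch)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)   # grayscale conversion
    equalized = cv2.equalizeHist(gray)                   # histogram equalization
    corrected = np.power(equalized / 255.0, gamma)       # gamma correction on [0, 1] values
    return (corrected - corrected.mean()) / (corrected.std() + 1e-8)  # zero-mean normalization
```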
The network model is then iteratively trained according to the OCT image samples in the training set and the preset loss function, so that the initial image segmentation model is trained into the trained image segmentation model. The preset loss function constrains the error between the predicted image corresponding to an OCT image sample and its mask image sample, and the error between the predicted edge mask corresponding to the OCT image sample and its edge mask sample, where the predicted image is the image obtained by the initial image segmentation model after processing the OCT image sample, and the predicted edge mask is the image obtained by the SE-ResNet model after processing the OCT image sample.
For the task of segmenting the lumen and the embedded stent from OCT images, the loss function in the embodiment of the present application is a combined loss function consisting of a region-based Dice loss and an edge-based Boundary loss; see formula (4):
L = αL_Dice + βL_Boundary  (4)
In the above formula (4), L_Dice is the region-based Dice loss, L_Boundary is the edge-based Boundary loss, and α and β are balance coefficients used to balance the influence of the region-based loss and the edge-based loss on the output. The calculation of L_Dice is given by formula (5) below, and the calculation of L_Boundary by formula (6) below.
In the above formula (5), i denotes a pixel of the image, c denotes the class corresponding to the pixel, g indicates whether the classification of the pixel is correct, and p denotes the probability that the pixel is assigned to a given class.
In the above formula (6), φ_G denotes the boundary level set, and S_θ(ξ) denotes the probability output by the SE-ResNet model.
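A sketch of the combined loss of formula (4) is given below, pairing a soft Dice loss (one common variant of formula (5)) with a boundary loss of the level-set form suggested by formula (6); the exact Dice variant, the precomputed distance map standing in for φ_G, and the balance coefficients alpha and beta are assumptions, not the application's exact definitions:

```python
import torch

def dice_loss(prob, target, eps=1e-6):
    # prob, target: (N, C, H, W); soft Dice averaged over classes (assumed variant of formula (5))
    inter = (prob * target).sum(dim=(0, 2, 3))
    denom = prob.sum(dim=(0, 2, 3)) + target.sum(dim=(0, 2, 3))
    return 1.0 - (2.0 * inter / (denom + eps)).mean()

def boundary_loss(prob, dist_map):
    # dist_map: precomputed level set of the ground-truth boundary (stand-in for phi_G in formula (6))
    return (prob * dist_map).mean()

def combined_loss(prob, target, dist_map, alpha=1.0, beta=0.01):
    # formula (4): L = alpha * L_Dice + beta * L_Boundary (alpha and beta are assumed values)
    return alpha * dice_loss(prob, target) + beta * boundary_loss(prob, dist_map)
```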
It should be understood that, for the task of segmenting the lumen and the embedded stent from OCT images, the image segmentation model trained with the above combined loss function can minimize the region loss of the image while supplementing the edge information in the image, effectively solving the problem that lumen images incur large segmentation loss values during segmentation, which leads to low segmentation accuracy.
In the network model provided by the embodiment of the present application, the densely connected U-Net model can fuse feature information of different depths. Since the Boundary loss in the above combined loss function fully accounts for the edge loss in the image segmentation process, applying the SE-ResNet model in the image segmentation model enhances the reuse and expressive power of edge features. Therefore, segmenting the acquired OCT images with a network model composed of the densely connected U-Net model and the SE-ResNet model can improve the model's segmentation accuracy and the recognizability of the lumen and the embedded stent.
For the task of segmenting the lumen and the embedded stent from OCT images, Fig. 4 shows the results of segmenting the same OCT image with a conventional U-Net image segmentation model and with the image segmentation model provided by the present application. In Fig. 4, (a) is the OCT image to be segmented, (b) is the result of segmenting the OCT image with the conventional U-Net image segmentation model, and (c) is the result of segmenting the OCT image with the image segmentation model provided by the present application. Fig. 4 not only verifies the feasibility of the image segmentation model provided by the embodiment of the present application, but also shows that the image segmented with this model is more accurate, clearly improving the segmentation accuracy compared with the prior art.
The mask image obtained by segmenting the OCT image with the image segmentation model provided by the embodiment of the present application can be applied to the task of detecting the neointimal index. Specifically, the mask image can be used to calculate the neointimal index of the OCT image, and the degree of coverage of the embedded stent in the OCT image can then be determined according to the neointimal index, thereby completing the detection task.
It should be understood that the neointimal index represents the ratio of the area enclosed by the edge of the embedded stent to the area enclosed by the edge of the lumen in the OCT image; see formula (7):
Neointimal index = S_stent / S_lumen  (7)
In the above formula (7), S_stent denotes the area covered by the embedded stent in the OCT image, and S_lumen denotes the cross-sectional area of the lumen in the OCT image. The larger the ratio in formula (7), the more closely the embedded stent adheres to the vessel wall (also called apposition) and the deeper the stent is embedded in the vessel; conversely, the smaller the ratio, the lower the degree of apposition of the embedded stent in the vessel and the shallower its embedding.
In practical application, Fig. 5 is a schematic diagram of an edge-fitting image provided by an embodiment of the present application. Referring to Fig. 5, the mask image shown in Fig. 4(c) is fitted to form the closed edge of the embedded stent and the closed edge of the lumen; in Fig. 5, the area enclosed by the dashed line is the closed edge of the embedded stent, and the area enclosed by the solid line is the closed edge of the lumen. S_stent is then obtained by calculating the area enclosed by the stent edge in the mask image, and S_lumen is obtained by calculating the area enclosed by the lumen edge in the mask image; finally, the neointimal index is calculated according to formula (7).
Measuring the supporting effect of the embedded stent on the inner vessel wall by calculating this ratio avoids the inaccuracies that arise when calculating the embedding depth of the stent in the vascular lumen, where manual measurements of the embedding depth are inconsistent or rely excessively on single edge pixels; it speeds up the detection of the degree to which the intima covers the embedded stent and improves the accuracy of the calculation results.
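The area measurements of formula (7) could be taken from the predicted mask by fitting closed contours, as in the following OpenCV sketch; the label values assigned to the lumen and stent classes are assumptions for illustration:

```python
import cv2
import numpy as np

def neointimal_index(mask, lumen_label=1, stent_label=2):
    """S_stent / S_lumen from a label mask (sketch; label values are assumed)."""
    def enclosed_area(binary):
        contours, _ = cv2.findContours(binary.astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # area enclosed by the largest fitted closed edge
        return max((cv2.contourArea(c) for c in contours), default=0.0)

    s_stent = enclosed_area(mask == stent_label)
    s_lumen = enclosed_area(mask == lumen_label)
    return s_stent / s_lumen if s_lumen > 0 else 0.0
```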
Based on the same inventive concept, as an implementation of the above method, an embodiment of the present application provides an automatic assessment system for embedded-stent neointimal coverage based on an OCT platform. Referring to Fig. 6, the automatic assessment system 600 for embedded-stent neointimal coverage based on an OCT platform includes:
an acquisition module 601, configured to acquire an OCT image of a target lumen in which an embedded stent is placed;
a processing module 602, configured to input the OCT image into a preset image segmentation model for processing and to output a mask image corresponding to the OCT image, the mask image identifying the lumen and the embedded stent; and
an evaluation module 603, configured to evaluate the neointimal coverage between the lumen and the embedded stent according to the mask image.
Optionally, the preset segmentation model includes a first convolutional layer, a first downsampling module, a second downsampling module, a first upsampling module, a second upsampling module, a third upsampling module, and a first activation layer. The first convolutional layer, the first downsampling module, the second downsampling module, and the first upsampling module are connected in sequence. The feature map output by the first upsampling module and the feature map output by the first downsampling module are processed by a first concatenation operation and a second activation function and then input to the second upsampling module. The feature map obtained by feeding the output of the first downsampling module through the third upsampling module, the feature map output by the first convolutional layer, and the feature map processed by the second upsampling module are processed by a second concatenation operation and the first activation function to output the mask image.
Optionally, the first downsampling module and the second downsampling module each include a downsampling layer and a second convolutional layer, and the first upsampling module, the second upsampling module, and the third upsampling module each include an upsampling layer and a third convolutional layer; the first, second, and third convolutional layers all use 'same'-mode convolution.
Optionally, the training of the image segmentation model includes: building a network model that includes an initial image segmentation model and an SE-ResNet model;
and training the network model according to a preset loss function and a training set, so that the initial image segmentation model is trained into the trained image segmentation model, where the SE-ResNet model is used to extract predicted edge masks of the OCT image samples in the training set.
Optionally, the training set includes OCT image samples, a mask image sample corresponding to each OCT image sample, and an edge mask sample corresponding to each OCT image sample. The loss function constrains the error between the predicted image corresponding to an OCT image sample and its mask image sample, and the error between the predicted edge mask corresponding to the OCT image sample and its edge mask sample, where the predicted image is the image obtained by the initial image segmentation model after processing the OCT image sample, and the predicted edge mask is the image obtained by the SE-ResNet model after processing the OCT image sample.
Optionally, the SE-ResNet model includes a fourth convolutional layer, a squeeze layer, and an excitation layer connected in sequence,
where the output of the fourth convolutional layer and the output of the excitation layer are combined by a weighting operation, and the weighted output is added to the input of the fourth convolutional layer to output the edge.
Optionally, the reduction (scaling) factor in the excitation layer is 16.
The automatic assessment system 600 for stent neointimal coverage based on an OCT platform provided by this embodiment can execute the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the above division of functional units and modules is only used as an example. In practical applications, the above functions may be assigned to different functional units or modules as needed, i.e., the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Based on the same inventive concept, an embodiment of the present application further provides a terminal device. Fig. 7 is a schematic diagram of the terminal device provided by an embodiment of the present application. As shown in Fig. 7, the terminal device provided by this embodiment includes a memory 701 and a processor 702; the memory 701 is used to store a computer program 703, and the processor 702 is used to execute the method described in the above method embodiments when the computer program is invoked. Alternatively, when the processor 702 executes the computer program 703, the functions of the modules/units in the above apparatus embodiments are implemented, for example the functions of units 601 to 603 shown in Fig. 6.
Exemplarily, the computer program 703 may be divided into one or more modules/units, which are stored in the memory 701 and executed by the processor 702 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the terminal device.
Those skilled in the art can understand that Fig. 7 is only an example of the terminal device and does not constitute a limitation on the terminal device; it may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal device 700 may further include input/output devices, network access devices, a bus, and the like.
The processor 702 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 701 may be an internal storage unit of the terminal device, such as a hard disk or memory of the terminal device. The memory 701 may also be an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device. Further, the memory 701 may include both an internal storage unit and an external storage device of the terminal device. The memory 701 is used to store the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
The terminal device provided by this embodiment can execute the above method embodiments; its implementation principles and technical effects are similar and are not repeated here.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the method described in the above method embodiments is implemented.
An embodiment of the present application further provides a computer program product that, when run on a terminal device, causes the terminal device to implement the method described in the above method embodiments.
An embodiment of the present application further provides a chip system, including a processor coupled to a memory, where the processor executes a computer program stored in the memory to implement the method described in the above method embodiments. The chip system may be a single chip or a chip module composed of multiple chips.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present application may be implemented by instructing relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable storage medium may include at least: any entity or apparatus capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunication signals.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not described or detailed in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the apparatus/device embodiments described above are merely illustrative; the division of the modules or units is only a logical functional division, and there may be other divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical, or other forms.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if the [described condition or event] is detected" may be interpreted, depending on the context, to mean "once it is determined", "in response to determining", "once the [described condition or event] is detected", or "in response to detecting the [described condition or event]".
In addition, in the description of this specification and the appended claims, the terms "first", "second", "third", and the like are used only to distinguish the descriptions and should not be understood as indicating or implying relative importance.
References in this specification to "one embodiment", "some embodiments", and the like mean that a particular feature, structure, or characteristic described in connection with that embodiment is included in one or more embodiments of the present application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments", and the like appearing in various places in this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments", unless specifically emphasized otherwise. The terms "comprising", "including", "having", and their variants mean "including but not limited to", unless specifically emphasized otherwise.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210528451.7A CN115018768A (en) | 2022-05-16 | 2022-05-16 | An automatic assessment system for stent neointimal coverage based on OCT platform |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210528451.7A CN115018768A (en) | 2022-05-16 | 2022-05-16 | An automatic assessment system for stent neointimal coverage based on OCT platform |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115018768A true CN115018768A (en) | 2022-09-06 |
Family
ID=83069989
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210528451.7A Pending CN115018768A (en) | 2022-05-16 | 2022-05-16 | An automatic assessment system for stent neointimal coverage based on OCT platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115018768A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100094127A1 (en) * | 2008-10-14 | 2010-04-15 | Lightlab Imaging, Inc. | Methods for stent strut detection and related measurement and display using optical coherence tomography |
US20150297097A1 (en) * | 2014-01-14 | 2015-10-22 | Volcano Corporation | Vascular access evaluation and treatment |
CN110211137A (en) * | 2019-06-08 | 2019-09-06 | 西安电子科技大学 | Satellite Image Segmentation method based on residual error network and U-Net segmentation network |
US20200327664A1 (en) * | 2019-04-10 | 2020-10-15 | Case Western Reserve University | Assessment of arterial calcifications |
WO2021193018A1 (en) * | 2020-03-27 | 2021-09-30 | テルモ株式会社 | Program, information processing method, information processing device, and model generation method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111325739B | | Method and device for detecting lung focus and training method of image detection model |
CN107480677B | | Method and device for identifying interest region in three-dimensional CT image |
CN111862044B | | Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium |
WO2021082691A1 | | Segmentation method and apparatus for lesion area of eye oct image, and terminal device |
CN112396605B | | Network training method and device, image recognition method and electronic equipment |
US11684333B2 | | Medical image analyzing system and method thereof |
CN111695463A | | Training method of face impurity detection model and face impurity detection method |
CN109767448B | | Segmentation model training method and device |
CN114096993B | | Image segmentation confidence determination |
WO2021232320A1 | | Ultrasound image processing method and system, and computer readable storage medium |
CN110880177A | | Image identification method and device |
CN112614133A | | Three-dimensional pulmonary nodule detection model training method and device without anchor point frame |
CN117152507B | | Tooth health state detection method, device, equipment and storage medium |
CN104732520A | | Cardio-thoracic ratio measuring algorithm and system for chest digital image |
CN113705595A | | Method, device and storage medium for predicting degree of abnormal cell metastasis |
Taqi et al. | | Skin lesion detection by android camera based on SSD-Mobilenet and tensorflow object detection API |
CN111311565A | | Eye OCT image-based detection method and device for positioning points of optic cups and optic discs |
CN115845248B | | Positioning method and device for ventricular catheter pump |
CN111539926B | | Image detection method and device |
CN113450329B | | Microcirculation image blood vessel branch erythrocyte flow rate calculation method and system |
CN114820652A | | Method, device and medium for segmenting local quality abnormal region of mammary X-ray image |
US20240188688A1 | | Mehtod and apparatus for processing foot information |
CN117437697B | | Training method of prone position human body detection model, prone position human body detection method and system |
CN117333487B | | Acne classification method, device, equipment and storage medium |
CN115018768A | | An automatic assessment system for stent neointimal coverage based on OCT platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||