CN113724276B - Polyp image segmentation method and device - Google Patents
- Publication number: CN113724276B (application CN202110889919.0A)
- Authority: CN (China)
- Prior art keywords: image, polyp, features, feature, shallow
- Legal status: Active
Classifications
- G06T7/12 — Edge-based segmentation
- G06F18/253 — Fusion techniques of extracted features
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06T5/70 — Denoising; Smoothing
- G06T7/0012 — Biomedical image inspection
- G06T7/90 — Determination of colour characteristics
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30096 — Tumor; Lesion
Description
Technical Field
The embodiments of the present invention relate to the field of image processing technology, and in particular to a method and device for segmenting polyp images.
Background Art
Polyps, especially multiple polyps, are prone to becoming cancerous, so early screening and treatment of polyps is essential. Polyp segmentation is a computer vision task that automatically delineates polyp regions in images or videos, greatly reducing doctors' workload; building an accurate polyp segmentation model is therefore of great significance for clinical diagnosis.
At present, PraNet, which is based on a parallel reverse attention network, is the most commonly used prior art. PraNet first uses a Res2Net neural network to extract features at different semantic levels from polyp images, then aggregates the high-level features with parallel decoders to obtain the global context of the image. However, because the high-level features lose too much detail, the resulting polyp segmentation is relatively coarse. To further mine polyp boundary cues, PraNet uses a reverse attention module to model the relationship between the polyp region and the polyp boundary; through the continual interaction and mutual complementation of the two, PraNet obtains more accurate segmentation predictions.
Although PraNet can obtain relatively accurate results, it has two important defects. (1) Its segmentation of small polyps is poor: small polyps lose too much information in the high-level semantic features and are hard to recover directly, and the boundary annotations of small polyps carry large errors that strongly affect the final result. (2) It ignores the color bias present in the datasets: polyp images captured under different conditions differ considerably in color, and this difference interferes with training of the segmentation model. Especially when training images are few, the model easily overfits to polyp color, causing a marked drop in generalization in real application scenarios.
Summary of the Invention
To solve the above technical problems, an embodiment of the present invention provides a polyp image segmentation method, comprising:
acquiring a polyp image to be input;
selecting, from a preset training set, a reference image whose color differs from that of the polyp image, and exchanging the colors of the reference image and the polyp image;
extracting shallow features and deep features from the color-exchanged polyp image, suppressing background noise in the shallow features with a shallow attention module, and fusing the shallow features with the deep features;
using a probability correction strategy to rebalance the predicted response values of the fused features, obtaining a polyp feature map with clear edges.
Further, exchanging the colors of the reference image and the polyp image comprises:
converting the polyp image X1 and the reference image X2 from the RGB color space to the LAB color space, obtaining their color values L1 and L2 in the LAB color space;
computing the per-channel mean and standard deviation of the polyp image X1 and of the reference image X2 in the LAB color space;
obtaining, via a preset color conversion formula, the color values of the polyp image Y1 and of the reference image Y2 in the RGB color space.
Further, suppressing background noise in the shallow features with the shallow attention module comprises:
upsampling the deep features by bilinear interpolation so that the upsampled deep features have the same resolution as the shallow features;
keeping the elements greater than 0 in the upsampled deep features as the attention map for the shallow features, obtaining the deep features to be fused;
multiplying the deep features to be fused with the shallow features element-wise, obtaining shallow features with suppressed background noise.
Further, fusing the shallow features and the deep features comprises:
extracting the first feature, second feature, and third feature at the last three scales produced when a convolutional neural network processes the noise-suppressed shallow features;
fusing the first feature and the second feature to obtain a first fused feature;
fusing the second feature and the third feature to obtain a second fused feature;
concatenating the first fused feature and the second fused feature along the channel dimension to obtain the final fused feature.
Further, using the probability correction strategy to rebalance the predicted response values of the fused features to obtain a polyp feature map with clear edges comprises:
counting the pixels in the fused polyp feature map whose feature response value is greater than 0 to obtain a first pixel count;
counting the pixels in the fused polyp feature map whose feature response value is less than 0 to obtain a second pixel count;
normalizing the first pixel count and the second pixel count, dividing the feature response values greater than 0 by the normalized first count, and dividing the feature response values less than 0 by the normalized second count, obtaining the corrected polyp feature map.
A polyp image segmentation device, comprising:
an acquisition module for acquiring a polyp image to be input;
a processing module for selecting, from a preset training set, a reference image whose color differs from that of the polyp image, and exchanging the colors of the reference image and the polyp image;
the processing module being further configured to extract shallow and deep features from the color-exchanged polyp image, suppress background noise in the shallow features with the shallow attention module, and fuse the shallow features with the deep features;
an execution module for using the probability correction strategy to rebalance the predicted response values of the fused features, obtaining a polyp feature map with clear edges.
Further, the processing module comprises:
a first processing submodule for converting the polyp image X1 and the reference image X2 from the RGB color space to the LAB color space, obtaining their color values L1 and L2 in the LAB color space;
a second processing submodule for computing the per-channel mean and standard deviation of the polyp image X1 and of the reference image X2 in the LAB color space;
a third processing submodule for obtaining, via a preset color conversion formula, the color values of the polyp image Y1 and of the reference image Y2 in the RGB color space.
Further, the processing module comprises:
a fourth processing submodule for upsampling the deep features by bilinear interpolation so that the upsampled deep features match the resolution of the shallow features;
a first acquisition submodule for keeping the elements greater than 0 in the upsampled deep features as the attention map for the shallow features, obtaining the deep features to be fused;
a first execution submodule for multiplying the deep features to be fused with the shallow features element-wise, obtaining shallow features with suppressed background noise.
Further, the processing module comprises:
a second acquisition submodule for extracting the first, second, and third features at the last three scales produced when a convolutional neural network processes the noise-suppressed shallow features;
a fifth processing submodule for fusing the first feature and the second feature to obtain a first fused feature;
a sixth processing submodule for fusing the second feature and the third feature to obtain a second fused feature;
a second execution submodule for concatenating the first fused feature and the second fused feature along the channel dimension to obtain the final fused feature.
Further, the execution module comprises:
a third acquisition submodule for counting the pixels in the fused polyp feature map whose feature response value is greater than 0 to obtain a first pixel count;
a fourth acquisition submodule for counting the pixels in the fused polyp feature map whose feature response value is less than 0 to obtain a second pixel count;
a third execution submodule for normalizing the first and second pixel counts, dividing the feature response values greater than 0 by the normalized first count and the feature response values less than 0 by the normalized second count, obtaining the corrected polyp feature map.
The beneficial effects of the embodiments of the present invention are as follows.
(1) For the problem of inaccurate segmentation of small polyps, the shallow attention module (SAM) of the present invention strengthens the model's ability to extract and exploit the shallow features of the neural network, since shallow features retain more detail for small polyps. Unlike traditional methods that fuse multi-level features directly by addition or concatenation, SAM uses the deep features as guidance: an attention mechanism removes the background noise from the shallow features, greatly improving their usability. In addition, the foreground and background pixels of small-polyp images are distributed unevenly; through the probability correction strategy (PCS), the predicted response values can be corrected dynamically and adaptively at inference time, sharpening the edges of the segmentation target and reducing the impact of the foreground-background imbalance.
(2) For the color bias in the datasets, the present invention proposes a color exchange (CE) operation to eliminate the influence of color bias on model training. Specifically, through CE the colors of different images can be transferred to one another, and the same image can take on different colors, decoupling image color from image content; the model can therefore focus on the content itself during training without being distracted by color. Extensive quantitative and qualitative experiments show that the proposed SANet model segments polyp regions from images accurately and efficiently and generalizes better in a variety of complex real-world scenarios.
Brief Description of the Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of the polyp image segmentation method provided by an embodiment of the present invention;
FIG. 2 is a schematic comparison of effects provided by an embodiment of the present invention;
FIG. 3 is another schematic comparison of effects provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of the polyp image segmentation device provided by an embodiment of the present invention;
FIG. 5 is a basic structural block diagram of the computer device provided by an embodiment of the present invention.
Detailed Description
To help those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings.
Some of the flows described in the specification, claims, and drawings of the present invention contain operations that appear in a particular order, but it should be clearly understood that these operations may be executed out of the order in which they appear herein, or in parallel. Operation numbers such as 101 and 102 serve only to distinguish the operations; the numbers themselves do not imply any execution order. In addition, these flows may contain more or fewer operations, which may be executed sequentially or in parallel. It should be noted that terms such as "first" and "second" herein distinguish different messages, devices, modules, etc.; they do not imply an order, nor do they require "first" and "second" to be of different types.
The embodiments described below are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to FIG. 1, an embodiment of the present invention provides a polyp image segmentation method, comprising:
S1100: acquiring a polyp image to be input;
S1200: selecting, from a preset training set, a reference image whose color differs from that of the polyp image, and exchanging the colors of the reference image and the polyp image;
S1300: extracting shallow features and deep features from the color-exchanged polyp image, suppressing background noise in the shallow features with the shallow attention module, and fusing the shallow features with the deep features;
S1400: using the probability correction strategy to rebalance the predicted response values of the fused features, obtaining a polyp feature map with clear edges.
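As a minimal sketch, the control flow of S1100 to S1400 can be written as below. Every function body is a trivial numpy stand-in invented purely for illustration; the real method uses a CNN backbone (Res2Net) for feature extraction, and the SAM and PCS components described later. Only the sequencing of the four steps mirrors the claimed method.

```python
import numpy as np

def color_exchange(img, ref):
    # Stand-in for CE: match the image's global mean/std to the reference's.
    return (img - img.mean()) / (img.std() + 1e-8) * ref.std() + ref.mean()

def extract_features(img):
    # Stand-in for a CNN backbone: a full-resolution "shallow" map and a
    # 2x-downsampled "deep" map.
    return img, img[::2, ::2]

def shallow_attention_fuse(shallow, deep):
    # Stand-in for SAM: nearest-style upsample of the deep map, clamp
    # negatives to 0 to form the attention map, then gate the shallow map.
    attn = np.maximum(np.kron(deep, np.ones((2, 2))), 0.0)
    return shallow * attn

def probability_correction(fmap):
    # Stand-in for PCS: divide positive/negative responses by the
    # normalized counts of their respective pixel groups.
    pos, neg = fmap > 0, fmap < 0
    n = pos.sum() + neg.sum() + 1e-8
    out = fmap.copy()
    out[pos] /= pos.sum() / n + 1e-8
    out[neg] /= neg.sum() / n + 1e-8
    return out

def segment_polyp(image, reference):
    swapped = color_exchange(image, reference)       # S1200: color exchange
    shallow, deep = extract_features(swapped)        # S1300: feature extraction
    fused = shallow_attention_fuse(shallow, deep)    # S1300: SAM fusion
    return probability_correction(fused)             # S1400: PCS rebalancing
```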
The shallow attention module (SAM), probability correction strategy (PCS), and color exchange (CE) introduced above together address the two defects of prior methods: SAM exploits detail-rich shallow features under deep-feature guidance to recover small polyps, PCS rebalances the predicted response values at inference time, and CE decouples image color from image content during training. Extensive quantitative and qualitative experiments show that the proposed SANet model segments polyp regions accurately and efficiently and generalizes better in complex real-world scenarios.
The embodiments of the present invention thus comprise three components: CE, SAM, and PCS. CE, used in the data augmentation stage, transfers the colors of other images onto the input image; SAM, used in the feature fusion stage, brings out the full potential of the shallow features; PCS, used in the inference stage, refines the predicted results.
Specifically, the color exchange operation acts directly on the input image, so that the same input image can take on different color styles during training, as shown in FIG. 2. For any input image, an image of a different color is randomly selected from the training set as a reference, and its color is transferred onto the input image. Because the reference image is chosen at random each time, the same input image can appear in different color styles while its label stays unchanged; the model therefore focuses on image content during training and is not influenced by image color. Exchanging the colors of the reference image and the polyp image comprises:
Step 1: converting the polyp image X1 and the reference image X2 from the RGB color space to the LAB color space, obtaining their color values L1 and L2 in the LAB color space;
Step 2: computing the per-channel mean and standard deviation of the polyp image X1 and of the reference image X2 in the LAB color space;
Step 3: obtaining, via a preset color conversion formula, the color values of the polyp image Y1 and of the reference image Y2 in the RGB color space.
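The "preset color conversion formula" itself is not reproduced in this excerpt. A common choice consistent with Steps 1 to 3 is Reinhard-style per-channel statistics transfer, sketched below in numpy; the RGB-to-LAB conversion is assumed to be done externally (e.g. with scikit-image's `rgb2lab`/`lab2rgb`), so the function operates directly on LAB arrays.

```python
import numpy as np

def color_exchange(content_lab, reference_lab):
    """Transfer per-channel mean/std statistics from reference to content.

    Both inputs are float arrays of shape (H, W, 3) in LAB space; the
    conversion from and back to RGB is assumed to happen outside this
    sketch.
    """
    out = np.empty_like(content_lab, dtype=np.float64)
    for c in range(3):
        src = content_lab[..., c]
        ref = reference_lab[..., c]
        mu_s, sigma_s = src.mean(), src.std() + 1e-8
        mu_r, sigma_r = ref.mean(), ref.std()
        # Re-center the source channel, re-scale it, then shift it to the
        # reference channel's statistics.
        out[..., c] = (src - mu_s) / sigma_s * sigma_r + mu_r
    return out
```

After this operation, the content image carries the reference image's channel-wise color statistics while keeping its own spatial structure, which is what allows the label to stay unchanged.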
In the embodiments of the present invention, small-polyp images suffer severe information loss during feature downsampling, so making full use of detail-rich shallow features is of great significance for segmenting small polyps; however, because of the limited receptive field, these features contain a large amount of background noise. The present invention therefore proposes SAM, which uses the deep features to suppress the background noise of the shallow features, greatly improving the usability of the shallow features and the model's segmentation of small polyps. Specifically, suppressing background noise in the shallow features with the shallow attention module comprises:
Step 1: upsampling the deep features by bilinear interpolation so that the upsampled deep features have the same resolution as the shallow features;
Step 2: keeping the elements greater than 0 in the upsampled deep features as the attention map for the shallow features, obtaining the deep features to be fused;
Step 3: multiplying the deep features to be fused with the shallow features element-wise, obtaining shallow features with suppressed background noise.
In one embodiment of the present invention, the deep feature is upsampled by bilinear interpolation to the same resolution as the shallow feature; its elements less than 0 are set to 0, and the result serves as the attention map; the attention map and the shallow feature are then multiplied element-wise to suppress the background noise.
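Steps 1 to 3 above can be sketched framework-free as follows. `bilinear_upsample` is a naive stand-in for what a deep-learning framework's interpolation op would do, and single-channel 2-D maps stand in for real multi-channel feature tensors.

```python
import numpy as np

def bilinear_upsample(x, out_h, out_w):
    """Naive bilinear upsampling of a (H, W) map."""
    h, w = x.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = x[y0][:, x0] * (1 - wx) + x[y0][:, x1] * wx
    bot = x[y1][:, x0] * (1 - wx) + x[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def shallow_attention(shallow, deep):
    """SAM as described: upsample the deep map to the shallow resolution,
    clamp negatives to zero to form the attention map, and multiply
    element-wise to suppress background noise."""
    h, w = shallow.shape
    attn = np.maximum(bilinear_upsample(deep, h, w), 0.0)
    return shallow * attn
```

Because the attention map is zero wherever the upsampled deep response is negative, background regions of the shallow feature are zeroed out rather than merely attenuated.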
In the embodiments of the present invention, SAM can effectively fuse the deep and shallow features, wherein fusing the shallow features and the deep features comprises:
extracting the first, second, and third features at the last three scales produced when a convolutional neural network processes the noise-suppressed shallow features;
fusing the first feature and the second feature to obtain a first fused feature;
fusing the second feature and the third feature to obtain a second fused feature;
concatenating the first fused feature and the second fused feature along the channel dimension to obtain the final fused feature.
In one embodiment of the present invention, in the SANet model, the features output by stage3, stage4, and stage5 of Res2Net (denoted f3, f4, and f5 respectively) are fused in order to reduce the computational cost of the model. Based on SAM, these features are fused so that the advantages of each scale are fully exploited: f3 and f4 are fused into one feature, f4 and f5 are fused into another, and the two newly obtained features are concatenated along the channel dimension to obtain the final fused feature.
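Under the reading above (f3 with f4, f4 with f5, then channel concatenation), the aggregation can be sketched as below. `fuse` is a simplified SAM-style gating invented for illustration, and plain numpy arrays already brought to a common `(C, H, W)` shape stand in for the Res2Net feature maps.

```python
import numpy as np

def fuse(shallow, deep):
    """Stand-in for SAM-style fusion of two same-resolution feature maps:
    the deeper feature's positive responses gate the shallower one."""
    return shallow * np.maximum(deep, 0.0)

def aggregate(f3, f4, f5):
    """Fuse f3 with f4 and f4 with f5, then concatenate the two results
    along the channel axis, as in the fusion steps above."""
    return np.concatenate([fuse(f3, f4), fuse(f4, f5)], axis=0)
```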
In the embodiments of the present invention, small-polyp images exhibit a severe imbalance between foreground and background pixels. Negative samples (background pixels) dominate during training; this prior bias makes the model tend to assign lower response values (logits) to positive samples (foreground pixels), degrading the segmentation of target edges. To correct this imbalance, the present invention uses PCS to rebalance the predicted response values at inference time, wherein using the probability correction strategy to rebalance the predicted response values of the fused features to obtain a polyp feature map with clear edges comprises:
counting the pixels in the fused polyp feature map whose feature response value is greater than 0 to obtain a first pixel count;
counting the pixels in the fused polyp feature map whose feature response value is less than 0 to obtain a second pixel count;
normalizing the first and second pixel counts, dividing the feature response values greater than 0 by the normalized first count, and dividing the feature response values less than 0 by the normalized second count, obtaining the corrected polyp feature map.
In one embodiment of the present invention, the number of pixels with a response value greater than 0 (logit > 0) is counted to obtain the first pixel value, and the number of pixels with a response value less than 0 (logit < 0) is counted to obtain the second pixel value; the two counts are normalized, each response value with logit > 0 is divided by the normalized first pixel value, and each response value with logit < 0 is divided by the normalized second pixel value, finally yielding the corrected polyp feature image. After PCS, the bias that the positive/negative sample imbalance introduces into the prediction is eliminated, and clearer predictions are obtained at the target edges; FIG. 2 shows some details of the results obtained with PCS.
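The PCS re-balancing above can be sketched in a few lines of NumPy. The variable names and the sum-to-one normalization of the two counts are illustrative assumptions; since foreground pixels are rarer, dividing by the smaller normalized count boosts their logits relative to the background.

```python
import numpy as np

def probability_correction(logits, eps=1e-8):
    """Re-balance predicted logits as in the PCS description above.

    Counts positive-response and negative-response pixels, normalizes the
    two counts so they sum to 1, and divides each group of logits by its
    normalized count, amplifying the rarer (foreground) responses.
    """
    n_pos = np.sum(logits > 0)  # first pixel value
    n_neg = np.sum(logits < 0)  # second pixel value
    total = n_pos + n_neg + eps
    w_pos = n_pos / total       # normalized first pixel value
    w_neg = n_neg / total       # normalized second pixel value
    corrected = logits.copy()
    corrected[logits > 0] /= (w_pos + eps)
    corrected[logits < 0] /= (w_neg + eps)
    return corrected
```

For example, with one foreground pixel of logit 2.0 among three background pixels of logit -1.0, the foreground logit is divided by 0.25 (becoming 8.0) while the background logits are only divided by 0.75.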
Table 1. Quantitative results of different models on the datasets
In one embodiment of the present invention, Table 1 shows the quantitative results of different models on five datasets: Kvasir, CVC-ClinicDB, CVC-ColonDB, EndoScene, and ETIS. The present invention achieves the highest score on every dataset. FIG. 3 shows qualitative experimental results of different algorithms on specific images; the present invention obtains more complete and sharper polyp regions than previous models. Taken together, these experiments show that the present invention better removes the bias and background noise present in the datasets and therefore performs excellently on polyp segmentation.
As shown in FIG. 4, to solve the above problems, an embodiment of the present invention further provides a polyp image segmentation device comprising an acquisition module 2100, a processing module 2200, and an execution module 2300. The acquisition module 2100 is used to acquire a polyp image to be input; the processing module 2200 is used to select, from a preset training set, a reference image whose color differs from that of the polyp image and to exchange the colors of the reference image and the polyp image; the processing module 2200 is further used to extract shallow features and deep features from the color-exchanged polyp image, to suppress the background noise of the shallow features with a shallow attention model, and to fuse the shallow features with the deep features; the execution module 2300 is used to apply the probability correction strategy model to re-balance the predicted response values of the fused features, obtaining a polyp feature image with clear edges.
In some embodiments, the processing module includes: a first processing submodule for converting the colors of the polyp image X1 and the reference image X2 from the RGB color space to the LAB color space, obtaining the color values L1 and L2 of the polyp image X1 and the reference image X2 in the LAB color space; a second processing submodule for calculating the per-channel mean and standard deviation of the polyp image X1 in the LAB color space and the per-channel mean and standard deviation of the reference image X2 in the LAB color space; and a third processing submodule for obtaining, with a preset color conversion formula, the color value of the polyp image Y1 in the RGB color space and the color value of the reference image Y2 in the RGB color space.
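The color exchange based on per-channel LAB statistics can be sketched as below. This is a Reinhard-style statistics swap and only an illustrative reading of the unspecified "preset color conversion formula"; the RGB↔LAB conversions themselves (e.g. via `skimage.color.rgb2lab`/`lab2rgb`) are assumed and omitted, so the function operates directly on LAB arrays.

```python
import numpy as np

def exchange_color_stats(lab1, lab2):
    """Swap per-channel color statistics of two LAB images.

    lab1: polyp image in LAB space, shape (H, W, 3).
    lab2: reference image in LAB space, shape (H, W, 3).
    Each image is re-centred and re-scaled, channel by channel, to the
    other image's mean and standard deviation.
    """
    def transfer(src, dst_mean, dst_std):
        mean = src.mean(axis=(0, 1))
        std = src.std(axis=(0, 1)) + 1e-8  # guard against flat channels
        return (src - mean) / std * dst_std + dst_mean

    m1, s1 = lab1.mean(axis=(0, 1)), lab1.std(axis=(0, 1))
    m2, s2 = lab2.mean(axis=(0, 1)), lab2.std(axis=(0, 1))
    y1 = transfer(lab1, m2, s2)  # polyp image wearing the reference colors
    y2 = transfer(lab2, m1, s1)  # reference image wearing the polyp colors
    return y1, y2
```

Converting y1 and y2 back to RGB would then give the color-exchanged images used for training.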
In some embodiments, the processing module includes: a fourth processing submodule for upsampling the deep features by bilinear interpolation so that the upsampled deep features have the same resolution as the shallow features; a first acquisition submodule for selecting the elements greater than 0 from the upsampled deep features as the attention map of the shallow features, obtaining the deep features to be fused; and a first execution submodule for multiplying the deep features to be fused with the shallow features element-wise, obtaining shallow features with background noise suppressed.
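The shallow attention step can be sketched as follows, assuming the deep features have already been bilinearly upsampled to the shallow features' resolution. Reading "selecting the elements greater than 0" as ReLU-style thresholding (keeping the positive values rather than a binary mask) is an interpretation, not a detail confirmed by the text.

```python
import numpy as np

def shallow_attention(shallow, deep_up):
    """Suppress background noise in shallow features with a deep attention map.

    shallow: (C, H, W) shallow feature map.
    deep_up: (C, H, W) deep feature map, pre-upsampled to match `shallow`.
    Positive deep responses (likely foreground) are kept as the attention
    map; everywhere else the shallow features are zeroed out.
    """
    attention = np.where(deep_up > 0, deep_up, 0.0)  # keep positive responses only
    return shallow * attention                       # element-wise gating
```

Background regions, where the deep network responds negatively, are thus driven to zero before fusion.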
In some embodiments, the processing module includes: a second acquisition submodule for extracting the first, second, and third features at the last three scales produced when the convolutional neural network processes the noise-suppressed shallow features; a fifth processing submodule for fusing the first feature and the second feature into a first fused feature; a sixth processing submodule for fusing the second feature and the third feature into a second fused feature; and a second execution submodule for concatenating the first fused feature and the second fused feature along the channel dimension into a final fused feature.
In some embodiments, the execution module includes: a third acquisition submodule for counting the number of pixels in the fused polyp feature image whose feature response value is greater than 0 to obtain a first pixel value; a fourth acquisition submodule for counting the number of pixels in the fused polyp feature image whose feature response value is less than 0 to obtain a second pixel value; and a third execution submodule for normalizing the first pixel value and the second pixel value, dividing the feature response values greater than 0 in the polyp feature image by the normalized first pixel value, and dividing the feature response values less than 0 by the normalized second pixel value to obtain a corrected polyp feature image.
To solve the above technical problems, an embodiment of the present invention further provides a computer device. Refer to FIG. 5, a block diagram of the basic structure of the computer device of this embodiment.
FIG. 5 is a schematic diagram of the internal structure of the computer device. As shown in FIG. 5, the computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected via a system bus. The non-volatile storage medium stores an operating system, a database, and computer-readable instructions; the database may store a sequence of control information, and when the computer-readable instructions are executed by the processor, the processor implements an image processing method. The processor provides the computing and control capabilities that support the operation of the whole device. The memory may store computer-readable instructions that, when executed by the processor, cause it to perform an image processing method. The network interface is used to connect to and communicate with a terminal. Those skilled in the art will understand that the structure shown in FIG. 5 is only a block diagram of the part of the structure relevant to the present application and does not limit the computer device to which the present application is applied; a specific computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In this embodiment, the processor executes the specific contents of the acquisition module 2100, the processing module 2200, and the execution module 2300 of FIG. 4, and the memory stores the program code and the various data required to execute these modules. The network interface is used for data transmission with a user terminal or a server. The memory in this embodiment stores the program code and data required to execute all submodules of the image processing method, and the server can call its program code and data to execute the functions of all submodules.
In the computer device provided by the embodiment of the present invention, the reference feature maps are obtained by extracting features from a high-definition image set in a reference pool. Owing to the diversity of images in the high-definition image set, the reference feature maps contain all the local features that may be needed and can provide high-frequency texture information for every low-resolution image, which not only ensures the richness of the features but also reduces the memory burden. In addition, the reference feature maps are retrieved according to the low-resolution image, so the selected reference feature maps can adaptively mask or enhance a variety of different features, enriching the details of the low-resolution image.
The present invention further provides a storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the image processing method of any of the above embodiments.
Those of ordinary skill in the art will understand that all or part of the processes of the above-described embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (ROM), or a random access memory (RAM).
It should be understood that although the steps in the flowcharts of the accompanying drawings are displayed sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on their execution, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages that are not necessarily completed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
The above are only some embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also fall within the scope of protection of the present invention.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110889919.0A CN113724276B (en) | 2021-08-04 | 2021-08-04 | Polyp image segmentation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113724276A CN113724276A (en) | 2021-11-30 |
CN113724276B true CN113724276B (en) | 2024-05-28 |
Family
ID=78674791
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110889919.0A Active CN113724276B (en) | 2021-08-04 | 2021-08-04 | Polyp image segmentation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113724276B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114972155B (en) * | 2021-12-30 | 2023-04-07 | 昆明理工大学 | Polyp image segmentation method based on context information and reverse attention |
CN116935051B (en) * | 2023-07-20 | 2024-06-14 | 深圳大学 | Polyp segmentation network method, system, electronic equipment and storage medium |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105430295A (en) * | 2015-10-30 | 2016-03-23 | 努比亚技术有限公司 | Image processing device and method |
WO2018224442A1 (en) * | 2017-06-05 | 2018-12-13 | Siemens Aktiengesellschaft | Method and apparatus for analysing an image |
CN109934789A (en) * | 2019-03-26 | 2019-06-25 | 湖南国科微电子股份有限公司 | Image de-noising method, device and electronic equipment |
CN110852335A (en) * | 2019-11-19 | 2020-02-28 | 燕山大学 | Target tracking system based on multi-color feature fusion and depth network |
CN111383214A (en) * | 2020-03-10 | 2020-07-07 | 苏州慧维智能医疗科技有限公司 | Real-time endoscope enteroscope polyp detection system |
CN111768425A (en) * | 2020-07-23 | 2020-10-13 | 腾讯科技(深圳)有限公司 | Image processing method, device and equipment |
CN111986204A (en) * | 2020-07-23 | 2020-11-24 | 中山大学 | Polyp segmentation method and device and storage medium |
CN112001861A (en) * | 2020-08-18 | 2020-11-27 | 香港中文大学(深圳) | Image processing method and apparatus, computer device, and storage medium |
CN112330688A (en) * | 2020-11-02 | 2021-02-05 | 腾讯科技(深圳)有限公司 | Image processing method and device based on artificial intelligence and computer equipment |
CN112489061A (en) * | 2020-12-09 | 2021-03-12 | 浙江工业大学 | Deep learning intestinal polyp segmentation method based on multi-scale information and parallel attention mechanism |
CN112669197A (en) * | 2019-10-16 | 2021-04-16 | 顺丰科技有限公司 | Image processing method, image processing device, mobile terminal and storage medium |
CN112950461A (en) * | 2021-03-27 | 2021-06-11 | 刘文平 | Global and superpixel segmentation fused color migration method |
CN113012150A (en) * | 2021-04-14 | 2021-06-22 | 南京农业大学 | Feature-fused high-density rice field unmanned aerial vehicle image rice ear counting method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111160350B (en) * | 2019-12-23 | 2023-05-16 | Oppo广东移动通信有限公司 | Portrait segmentation method, model training method, device, medium and electronic equipment |
-
2021
- 2021-08-04 CN CN202110889919.0A patent/CN113724276B/en active Active
Non-Patent Citations (2)
Title |
---|
Automatized colon polyp segmentation via contour region analysis; Alain Sánchez-González; Computers in Biology and Medicine; full text *
Research on intelligent detection and recognition of small intestinal lesions based on wireless capsule endoscopy images; Liu Shichen; China Master's Theses Full-text Database, Information Science and Technology; full text *
Also Published As
Publication number | Publication date |
---|---|
CN113724276A (en) | 2021-11-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10861133B1 (en) | Super-resolution video reconstruction method, device, apparatus and computer-readable storage medium | |
GB2587841A (en) | Utilizing a neural network having a two-stream encoder architecture to generate composite digital images | |
CN113724276B (en) | Polyp image segmentation method and device | |
WO2023015755A1 (en) | Matting network training method and matting method | |
WO2023280148A1 (en) | Blood vessel segmentation method and apparatus, and electronic device and readable medium | |
CN114066902A (en) | Medical image segmentation method, system and device based on convolution and transformer fusion | |
WO2021189889A1 (en) | Text detection method and apparatus in scene image, computer device, and storage medium | |
WO2021018199A1 (en) | Convolutional neural network-based image processing method and apparatus | |
CN116309648A (en) | A medical image segmentation model construction method based on multi-attention fusion | |
WO2018214769A1 (en) | Image processing method, device and system | |
WO2021168703A1 (en) | Character processing and identifying methods, storage medium, and terminal device | |
US9460489B2 (en) | Image processing apparatus and image processing method for performing pixel alignment | |
US12112482B2 (en) | Techniques for interactive image segmentation networks | |
US20130182943A1 (en) | Systems and methods for depth map generation | |
US20080158250A1 (en) | Rendering multiple clear rectangles using a pre-rendered depth buffer | |
CN114565768A (en) | Image segmentation method and device | |
Li et al. | Fast 3D texture-less object tracking with geometric contour and local region | |
CN117152063A (en) | A SWI-based cerebral microbleed image detection method, device and processing equipment | |
CN117496204A (en) | An image feature point matching method, device, equipment and medium | |
CN113139463B (en) | Method, apparatus, device, medium and program product for training a model | |
CN115496744A (en) | Lung cancer image segmentation method, device, terminal and medium based on mixed attention | |
WO2023108568A1 (en) | Model training method and apparatus for image processing, and storage medium and electronic device | |
TW202228070A (en) | Computer device and image processing method | |
CN113902001A (en) | Model training method and device, electronic equipment and storage medium | |
CN113343979A (en) | Method, apparatus, device, medium and program product for training a model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||