
CN114663421B - Retina image analysis system and method based on information migration and ordered classification - Google Patents


Info

Publication number
CN114663421B
CN114663421B (application CN202210367584.0A)
Authority
CN
China
Prior art keywords
image
network
bionic
segmentation
fundus
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210367584.0A
Other languages
Chinese (zh)
Other versions
CN114663421A (en)
Inventor
姚新明
陈洁
张靖
赵腾飞
何俊俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Affiliated Hospital of Wannan Medical College
Original Assignee
First Affiliated Hospital of Wannan Medical College
Application filed by First Affiliated Hospital of Wannan Medical College
Priority claimed from CN202210367584.0A
Publication of CN114663421A
Application granted
Publication of CN114663421B
Legal status: Active


Classifications

    • G06T 7/0012 — Biomedical image inspection
    • G06N 3/08 — Neural networks; Learning methods
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 2207/20048 — Transform domain processing
    • G06T 2207/20081 — Training; Learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30041 — Eye; Retina; Ophthalmic
    • G06T 2207/30101 — Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present invention proposes an intelligent retinal image analysis system based on information migration and ordered classification, comprising a fundus image preprocessing module, an information-migration-based fundus image vessel segmentation network module, and an ordered-classification-based intelligent analysis and prediction module. The preprocessing module combines multiple visual image enhancement algorithms that work together to effectively improve image quality; the information-migration-based vessel segmentation network effectively segments the retinal vessels; and the intelligent analysis and prediction module, which incorporates ordered classification, accurately predicts the grade of retinal state changes, achieving effective and fast intelligent analysis of retinal images.

Description

Retinal image analysis system and method based on information migration and ordered classification

Technical Field

The present invention belongs to the field of artificial-intelligence image processing, and specifically relates to an intelligent retinal image analysis system based on information migration and ordered classification.

Background Art

A fundus image captures the structures of the ocular fundus with a fundus camera, mainly including the retina, choroid, macula, and optic nerve. Retinal image acquisition is the process of collecting retinal image data with acquisition equipment, and involves issues such as imaging devices and imaging systems. Depending on the imaging principle, acquired retinal fundus images are usually divided into two types: fluorescein fundus images and conventional-source fundus images.

During acquisition, retinal images are affected by lighting conditions, shooting distance, acquisition angle, and the inherent aberrations of the human eye. As a result, the collected images usually contain a large amount of noise, some of them contain lesions, the pixel contrast is low, local illumination is uneven, the vessels are complex and densely distributed, and multiple textured backgrounds are present, so the overall image quality is poor.

In prior-art image processing pipelines, fundus images are generally processed with neural networks. Because of the limited quality of fundus image acquisition and the poor generality of conventional algorithms, retinal vessel segmentation in current fundus images is inaccurate, which in turn constrains the practical application of intelligent fundus image analysis.

Summary of the Invention

To address these problems of the prior art, this application proposes an intelligent retinal image analysis system based on information migration and ordered classification: the image is first enhanced by preprocessing, retinal vessels are then segmented by an information-migration-based fundus image segmentation network, and finally a neural network combined with ordered classification performs intelligent analysis and prediction, achieving intelligent analysis of retinal images.

The technical solution of this application is as follows:

An intelligent retinal image analysis system based on information migration and ordered classification comprises a fundus image preprocessing module, a fundus image vessel segmentation network module, and a fundus image intelligent analysis and prediction module.

The fundus image preprocessing module preprocesses the acquired fundus images to improve image quality and highlight useful features.

The fundus image vessel segmentation network module segments the vessels of the preprocessed fundus image based on information migration to obtain a vessel image; a guide network and a bionic network are defined within this module.

The fundus image intelligent analysis and prediction module predicts state changes of the vessel image based on ordered classification.

The fundus image preprocessing module comprises an image enhancement submodule, a mask generation submodule, and a multi-scale linear filter submodule.

The image enhancement submodule applies the dual-tree complex wavelet transform and an improved top-hat transform to the collected original fundus image to obtain an enhanced fundus image. This submodule effectively improves retinal image quality, increases vessel contrast, and highlights characteristic elements (vessel structure, optic disc, macula).

The dual-tree complex wavelet transform (DT-CWT) overcomes the shortcomings of the commonly used discrete wavelet transform: when the corresponding wavelet bases approximately satisfy the Hilbert transform relationship, the DT-CWT reduces the shift sensitivity of ordinary real wavelet transforms and improves directional selectivity, effectively improving image quality while preserving detail. After the original fundus image has undergone the DT-CWT, the improved top-hat transform is applied to the transformed image.

The improved top-hat transform specifically comprises the following steps: an opening operation is applied to the DT-CWT-transformed fundus image to obtain an opened fundus image; when the opened image is subtracted from the transformed image, pixels whose gray value changed keep their original value, while unchanged pixels take the subtraction result.

With this improved top-hat transform, the gray-level differences of the image increase markedly, and edges with small amplitude changes are also effectively preserved.
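As a concrete illustration, a minimal Python sketch of this modified subtraction rule follows, assuming an 8-bit grayscale input and an elliptical structuring element (the kernel size is an assumption, not specified by the patent):

```python
import cv2
import numpy as np

def improved_top_hat(img: np.ndarray, kernel_size: int = 15) -> np.ndarray:
    """Sketch of the modified top-hat rule: pixels whose gray value is altered
    by the opening keep their original value; unchanged pixels take the usual
    subtraction result (which is 0 where img == opened)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)  # morphological opening
    changed = img != opened                                 # gray value changed by opening
    return np.where(changed, img, cv2.subtract(img, opened))
```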

The mask generation submodule extracts the field of view based on spatial luminance information: the enhanced fundus image (a color image whose local detail and quality have been improved by the complex wavelet and top-hat transforms) is converted from RGB to YIQ format, a segmentation threshold is set, the surrounding black field of view is extracted, and an erosion operation yields the useful-information region, separating the fundus region from the background and producing a mask image. The mask image is multiplied with the enhanced fundus image to obtain the fundus-region image. The procedure comprises the following steps:

Step 1: Convert the enhanced fundus image from RGB to YIQ format:

$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.312 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1)$$

Equation (1) yields the three components of the fundus image in YIQ format.

Step 2: Set the segmentation threshold, extract the surrounding black field of view, and obtain the region of interest through an erosion operation.

The mask image is obtained as:

$$M(x', y) = \begin{cases} 1, & Y(x', y) > T_{\text{seg}} \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$

where "1" denotes the background border and "0" the fundus vessels; Y is the luminance information of the image, equal to the gray value of the luminance component (the Y component of the YIQ fundus image); M(x′, y) is the extracted background border; x′ and y are pixel coordinates; and T_seg is the segmentation threshold (set to 50 in the embodiment below).
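A minimal Python sketch of steps 1-2 is given below; the erosion kernel size is an assumption, and the threshold of 50 is taken from the embodiment described later:

```python
import cv2
import numpy as np

def fundus_region(enhanced_rgb: np.ndarray, threshold: float = 50.0) -> np.ndarray:
    """Field-of-view masking via the YIQ luminance component, eqs. (1)-(2)."""
    rgb = enhanced_rgb.astype(np.float32)
    # Y component of YIQ (eq. 1); I and Q are not needed for the mask
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    background = (y > threshold).astype(np.uint8)    # M(x', y): 1 = background border
    roi = 1 - background                             # complement keeps the fundus region
    roi = cv2.erode(roi, np.ones((5, 5), np.uint8))  # erosion trims the field-of-view rim
    return enhanced_rgb * roi[..., None]             # mask multiplied with enhanced image
```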

The multi-scale linear filter submodule applies a Hessian-matrix-based multi-scale linear filter, with parameters set according to the differing gray values and eigenvalues of the vessels in the fundus-region image; after filtering, noise is removed and the vessel features are further highlighted, yielding the filtered vessel feature image.
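As an illustration, scikit-image's Frangi filter is one widely used Hessian-based multi-scale line filter; the sketch below is a stand-in, not necessarily the exact filter of the patent, and the sigma range is an assumption:

```python
import numpy as np
from skimage.filters import frangi

def enhance_vessels(fundus_gray: np.ndarray) -> np.ndarray:
    """Multi-scale Hessian line filtering: eigenvalue analysis at several
    Gaussian scales enhances tubular (vessel-like) structures and
    suppresses point-like noise."""
    # sigmas should span the expected vessel radii in pixels
    return frangi(fundus_gray, sigmas=range(1, 6), black_ridges=True)
```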

The training process of the fundus image vessel segmentation network module comprises the following steps:

Step 201: The filtered vessel feature images and the segmentation labels are used as the common input of the guide network and the bionic network, and the guide network is trained on the vessel segmentation task. During iterative training of the guide network, a segmentation accuracy test is run on the validation set every A iterations, and the weights whose segmentation accuracy exceeds the preset threshold are saved. When iterative training is complete, the weights with the highest segmentation accuracy are selected as the optimal weights of the guide network.

Step 202: During iterative training of the bionic network, the guide network loads the optimal guide-network weights saved in step 201 and generates the guide codec matrices and guide residual similarities for the filtered vessel feature image; the bionic network generates the corresponding bionic codec matrices and bionic residual similarities for the same image. The bionic network fits the segmentation labels, the guide codec matrices, and the guide residual similarities, with a loss function used as the constraint during fitting.

Step 203: The bionic codec matrices, bionic residual similarity parameters, and segmentation network parameters of the bionic network are updated by back-propagation in the direction that decreases the loss function, and the process returns to step 202 for further iterative training. Every A iterations, a segmentation accuracy test is run on the validation set and the weights whose accuracy exceeds the preset threshold are saved; when iterative training is complete, the weights with the highest segmentation accuracy are selected as the final optimal weights of the bionic network.
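A compact PyTorch sketch of this two-stage schedule follows; the model interfaces, checkpoint names, and the `evaluate` helper are assumptions introduced for illustration:

```python
import torch

def train_bionic(guide, bionic, loader, val_loader, loss_fn,
                 eval_every=100):  # "every A iterations" with A = 100 assumed
    """Step 202/203 sketch: freeze the pre-trained guide (step 201) and fit
    the bionic network to labels plus the guide's codec matrices and
    residual similarities, validating periodically and keeping the best."""
    guide.load_state_dict(torch.load("guide_best.pt"))  # assumed checkpoint name
    guide.eval()
    opt = torch.optim.Adam(bionic.parameters(), lr=1e-4)
    best_acc, it = 0.0, 0
    for images, labels in loader:
        with torch.no_grad():                    # guide codec matrices / similarities
            g_codec, g_sim, _ = guide(images)
        s_codec, s_sim, pred = bionic(images)    # bionic counterparts + segmentation
        loss = loss_fn(pred, labels, s_codec, g_codec, s_sim, g_sim)  # eq. (5) + seg. term
        opt.zero_grad()
        loss.backward()                          # step 203: back-propagation update
        opt.step()
        it += 1
        if it % eval_every == 0:                 # validate and keep the best weights
            acc = evaluate(bionic, val_loader)   # hypothetical accuracy helper
            if acc > best_acc:
                best_acc = acc
                torch.save(bionic.state_dict(), "bionic_best.pt")
```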

In step 202, the guide codec matrix and the bionic codec matrix are G ∈ R^{m×n′}, generated from the feature maps of the guide/bionic encoder and decoder. The output feature map of the encoder layer is F¹ ∈ R^{h×w×m}, where h, w, and m are the height, width, and channel count of F¹; the output feature map of the decoder layer is F² ∈ R^{h×w×n′}, where h, w, and n′ are the height, width, and channel count of F². The codec matrix G ∈ R^{m×n′} is computed as:

$$G_{a,b}(x; W) = \sum_{s=1}^{h} \sum_{t=1}^{w} \frac{F^{1}_{s,t,a}(x; W) \, F^{2}_{s,t,b}(x; W)}{h \times w} \qquad (3)$$

where s = 1, …, h and t = 1, …, w; x and W denote the input image and the weights of the guide/bionic network, respectively; and G_{a,b}(x; W) is the entry in row a, column b of the guide or bionic codec matrix.
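Assuming batch-free feature maps laid out as described (height, width, channels), equation (3) can be sketched in PyTorch as:

```python
import torch

def codec_matrix(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    """Eq. (3): G[a, b] = sum_{s,t} F1[s, t, a] * F2[s, t, b] / (h * w).
    f1: encoder feature map (h, w, m); f2: decoder feature map (h, w, n')."""
    h, w, _ = f1.shape
    return torch.einsum("stm,stn->mn", f1, f2) / (h * w)
```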

In step 202, the residual similarities are extracted by a multi-scale residual similarity (MRS) collection module, which collects context information through similarity volumes, specifically comprising the following steps:

For the i-th feature vector Y⁽ⁱ⁾, a similarity value P_j′ is computed by element-wise multiplication between each center pixel P_center and the pixels P_j in its neighboring d×d region:

$$P_j' = P_j \times P_{\text{center}} \qquad (4)$$

where j indexes the coordinates of the d×d region. A local representation is obtained for each pixel of the filtered vessel feature image, and the local representations are concatenated along the channel dimension, giving the residual similarity of the i-th feature vector, $S_d^{(i)} \in \mathbb{R}^{H \times W' \times d^2}$, where d is the size of the user-defined region and H and W′ are the height and width of the feature vector. The corresponding residual similarities $S_d^{(i)}$ are obtained for the chosen region sizes and then summed, so the final residual similarity of the i-th feature vector is $S^{(i)} = \sum_d S_d^{(i)}$.
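A sketch of the similarity volume of equation (4) and its multi-scale aggregation is given below. Since the patent's exact channel-wise aggregation is not recoverable from the garbled figures, each volume is averaged over channels before summing across d; this reduction is an assumption:

```python
import torch
import torch.nn.functional as F

def similarity_volume(feat: torch.Tensor, d: int) -> torch.Tensor:
    """Eq. (4): multiply each center pixel element-wise with every pixel of
    its d x d neighborhood, then stack along the channel axis.
    feat: (C, H, W) -> (C * d * d, H, W)."""
    c, h, w = feat.shape
    patches = F.unfold(feat.unsqueeze(0), kernel_size=d, padding=d // 2)  # (1, C*d*d, H*W)
    patches = patches.view(c, d * d, h, w)
    sim = patches * feat.unsqueeze(1)            # P_j' = P_j * P_center
    return sim.reshape(c * d * d, h, w)

def multi_scale_residual_similarity(feat: torch.Tensor) -> torch.Tensor:
    """Sum the volumes over several region sizes d (multi-scale); channel
    counts differ per d, so each volume is averaged over channels first."""
    return sum(similarity_volume(feat, d).mean(dim=0) for d in (3, 5, 7))
```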

The guide network generates the guide codec matrices $G_i^{T}$ and guide residual similarities $S_i^{T}$, and the bionic network generates the bionic codec matrices $G_i^{S}$ and bionic residual similarities $S_i^{S}$, for i = 1, …, n. The loss function of the information migration task is:

$$L(W_t, W_s) = \frac{1}{N} \sum_{x} \sum_{i=1}^{n} \left( \lambda_i \left\| G_i^{T}(x; W_t) - G_i^{S}(x; W_s) \right\|_2^2 + \beta_i \left\| S_i^{T}(x; W_t) - S_i^{S}(x; W_s) \right\|_2^2 \right) \qquad (5)$$

where W_t denotes the guide network weights and W_s the bionic network weights; G_i^T is the codec matrix of the i-th feature vector of the guide network and G_i^S that of the bionic network; n is the number of feature vectors; λ_i and β_i are the weighting factors of the corresponding loss terms; and N is the number of data points.
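A minimal sketch of equation (5), assuming lists of per-feature-vector tensors from the two networks and plain floats for λ_i and β_i:

```python
import torch

def migration_loss(g_codecs, s_codecs, g_sims, s_sims, lambdas, betas):
    """Eq. (5): squared L2 gaps between guide (teacher) and bionic (student)
    codec matrices and residual similarities, summed over feature vectors i
    and averaged over the batch."""
    loss = torch.tensor(0.0)
    for g_t, g_s, lam in zip(g_codecs, s_codecs, lambdas):
        loss = loss + lam * (g_t - g_s).pow(2).mean()
    for sim_t, sim_s, beta in zip(g_sims, s_sims, betas):
        loss = loss + beta * (sim_t - sim_s).pow(2).mean()
    return loss
```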

The ordered-classification-based fundus image intelligent analysis and prediction module predicts state changes from the segmented vessel image, splitting the ordered multi-class problem into a sequence of binary classification problems.

The loss function of the ordered classification is:

$$L = -\frac{1}{N} \sum_{c=1}^{N} \sum_{t=1}^{T} \gamma_t \, w_c^{t} \Big[ y_c^{t} \log p\big(o_c^{t}\big) + \big(1 - y_c^{t}\big) \log\big(1 - p\big(o_c^{t}\big)\big) \Big] \qquad (7)$$

where N is the number of data points; T is the number of binary classification tasks; γ_t is the weight of the t-th binary task; $o_c^{t} = f(x_c; W_t)$ is the output of the c-th sample for the t-th binary task; $y_c^{t}$ is the true label of the t-th binary task for the c-th sample; $w_c^{t}$ is the weight of the c-th sample in the t-th binary task; W_t denotes the parameters of the classifier for the t-th binary task; x_c is the c-th input vector; and $p(\cdot)$ denotes the probability model.
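Equation (7) can be sketched as one weighted binary cross-entropy per threshold task; the cumulative target encoding (label 1 iff the sample's grade exceeds threshold t) is an assumption consistent with the ordered decomposition, and the per-sample weights w_c^t are set to 1 for brevity:

```python
import torch
import torch.nn.functional as F

def ordinal_loss(logits: torch.Tensor, grades: torch.Tensor,
                 task_weights: torch.Tensor) -> torch.Tensor:
    """Eq. (7) sketch. logits: (N, T) outputs o_c^t; grades: (N,) ordinal
    labels in 0..T; task_weights: (T,) gamma_t. Per-sample weights w_c^t
    are omitted (set to 1)."""
    n, t = logits.shape
    # y_c^t = 1 iff the grade exceeds threshold t, e.g. grade 3 with T=4 -> [1, 1, 1, 0]
    targets = (grades.unsqueeze(1) > torch.arange(t, device=logits.device)).float()
    per_task = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (per_task * task_weights).mean()
```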

An intelligent retinal image analysis method based on information migration and ordered classification comprises the following steps:

Step 1: Preprocess the acquired fundus image to improve image quality and highlight useful features.

Step 2: Segment the vessels of the preprocessed fundus image based on information migration to obtain a vessel image.

Step 3: Predict state changes of the vessel image based on ordered classification; in this embodiment, staged prediction of fundus image lesions is achieved accurately.

Step 1 specifically comprises the following steps:

S101: Apply the dual-tree complex wavelet transform and the improved top-hat transform to the collected original fundus image to obtain an enhanced fundus image. Step S101 effectively improves retinal image quality, increases vessel contrast, and highlights characteristic elements (vessel structure, optic disc, macula).

The improved top-hat transform specifically comprises the following steps: an opening operation is applied to the DT-CWT-transformed fundus image to obtain an opened fundus image; when the opened image is subtracted from the transformed image, pixels whose gray value changed keep their original value, while unchanged pixels take the subtraction result.

S102: Extract the field of view based on spatial luminance information: convert the enhanced fundus image from RGB to YIQ format, set a segmentation threshold, extract the surrounding black field of view, obtain the useful-information region through an erosion operation, separate the fundus region from the background to obtain a mask image, and multiply the mask image with the enhanced fundus image to obtain the fundus-region image.

S103: Apply the Hessian-matrix-based multi-scale linear filter, with parameters set according to the differing gray values and eigenvalues of the vessels in the fundus-region image; after filtering, noise is removed and the vessel features are further highlighted, yielding the filtered vessel feature image.

Step 2 specifically comprises the following steps:

Step 201: The filtered vessel feature images and the segmentation labels are used as the common input of the guide network and the bionic network, and the guide network is trained on the vessel segmentation task. During iterative training of the guide network, a segmentation accuracy test is run on the validation set every A iterations, and the weights whose segmentation accuracy exceeds the preset threshold are saved. When iterative training is complete, the weights with the highest segmentation accuracy are selected as the optimal weights of the guide network.

Step 202: During iterative training of the bionic network, the guide network loads the optimal guide-network weights saved in step 201 and generates the guide codec matrices and guide residual similarities for the filtered vessel feature image; the bionic network generates the corresponding bionic codec matrices and bionic residual similarities for the same image. The bionic network fits the segmentation labels, the guide codec matrices, and the guide residual similarities, with a loss function used as the constraint during fitting.

Step 203: The bionic codec matrices, bionic residual similarity parameters, and segmentation network parameters of the bionic network are updated by back-propagation in the direction that decreases the loss function, and the process returns to step 202 for further iterative training. Every A iterations, a segmentation accuracy test is run on the validation set and the weights whose accuracy exceeds the preset threshold are saved; when iterative training is complete, the weights with the highest segmentation accuracy are selected as the final optimal weights of the bionic network.

In step 202, the guide codec matrix and the bionic codec matrix are G ∈ R^{m×n′}, generated from the feature maps of the guide/bionic encoder and decoder. The output feature map of the encoder layer is F¹ ∈ R^{h×w×m}, where h, w, and m are the height, width, and channel count of F¹; the output feature map of the decoder layer is F² ∈ R^{h×w×n′}, where h, w, and n′ are the height, width, and channel count of F². The codec matrix G ∈ R^{m×n′} is computed as:

$$G_{a,b}(x; W) = \sum_{s=1}^{h} \sum_{t=1}^{w} \frac{F^{1}_{s,t,a}(x; W) \, F^{2}_{s,t,b}(x; W)}{h \times w} \qquad (3)$$

where s = 1, …, h and t = 1, …, w; x and W denote the input image and the weights of the guide/bionic network, respectively; and G_{a,b}(x; W) is the entry in row a, column b of the guide or bionic codec matrix.

In step 202, the residual similarities are extracted by the multi-scale residual similarity (MRS) collection module, which collects context information through similarity volumes, specifically comprising the following steps:

For the i-th feature vector Y⁽ⁱ⁾, a similarity value P_j′ is computed by element-wise multiplication between each center pixel P_center and the pixels P_j in its neighboring d×d region:

$$P_j' = P_j \times P_{\text{center}} \qquad (4)$$

where j indexes the coordinates of the d×d region. A local representation is obtained for each pixel of the filtered vessel feature image, and the local representations are concatenated along the channel dimension, giving the residual similarity of the i-th feature vector, $S_d^{(i)} \in \mathbb{R}^{H \times W' \times d^2}$, where d is the size of the user-defined region and H and W′ are the height and width of the feature vector. The corresponding residual similarities $S_d^{(i)}$ are obtained for the chosen region sizes and then summed, so the final residual similarity of the i-th feature vector is $S^{(i)} = \sum_d S_d^{(i)}$.

The guide network generates the guide codec matrices $G_i^{T}$ and guide residual similarities $S_i^{T}$, and the bionic network generates the bionic codec matrices $G_i^{S}$ and bionic residual similarities $S_i^{S}$. The loss function of the information migration task is:

$$L(W_t, W_s) = \frac{1}{N} \sum_{x} \sum_{i=1}^{n} \left( \lambda_i \left\| G_i^{T}(x; W_t) - G_i^{S}(x; W_s) \right\|_2^2 + \beta_i \left\| S_i^{T}(x; W_t) - S_i^{S}(x; W_s) \right\|_2^2 \right) \qquad (5)$$

where W_t denotes the guide network weights and W_s the bionic network weights; G_i^T is the codec matrix of the i-th feature vector of the guide network and G_i^S that of the bionic network; n is the number of feature vectors (i.e., the number of codec matrices); λ_i and β_i are the weighting factors of the loss terms for the i-th feature vector; and N is the number of data points.

Step 3 predicts state changes of the segmented vessel image, splitting the ordered multi-class problem into a sequence of binary classification problems.

The loss function of the ordered classification is:

$$L = -\frac{1}{N} \sum_{c=1}^{N} \sum_{t=1}^{T} \gamma_t \, w_c^{t} \Big[ y_c^{t} \log p\big(o_c^{t}\big) + \big(1 - y_c^{t}\big) \log\big(1 - p\big(o_c^{t}\big)\big) \Big] \qquad (7)$$

where N is the number of data points; T is the number of binary classification tasks; γ_t is the weight of the t-th binary task; $o_c^{t} = f(x_c; W_t)$ is the output of the c-th sample for the t-th binary task; $y_c^{t}$ is the true label of the t-th binary task for the c-th sample; $w_c^{t}$ is the weight of the c-th sample in the t-th binary task; W_t denotes the parameters of the classifier for the t-th binary task; x_c is the c-th input vector; and $p(\cdot)$ denotes the probability model. The final classification result is obtained by integrating the results of the individual binary classification tasks.

Compared with the prior art, the present invention has the following beneficial effects:

The present invention discloses an intelligent retinal image analysis system based on information migration and ordered classification: images are enhanced by preprocessing, retinal vessels are segmented by the information-migration-based fundus image segmentation network, and a neural network combined with ordered classification performs intelligent analysis and prediction, achieving intelligent analysis of retinal images.

In the visual image enhancement algorithm of the image preprocessing module of the present invention, multiple algorithms work together to effectively enhance image quality; the information-migration-based fundus image vessel segmentation network effectively segments retinal vessels; and the intelligent analysis and prediction module incorporating ordered classification predicts accurately, achieving effective and fast intelligent analysis of retinal images.

The fundus image preprocessing module of this application comprises the image enhancement submodule, the mask generation submodule, and the multi-scale linear filter submodule, which increase vessel contrast, highlight characteristic elements, and overcome the shortcomings of the commonly used discrete wavelet transform, effectively improving image quality while preserving detail; the gray-level differences of the image increase markedly, and edges with small amplitude changes are effectively preserved.

Brief Description of the Drawings

Other features, objects, and advantages of the present application will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:

Fig. 1 is a schematic diagram of the overall architecture of the retinal image intelligent analysis system based on information migration and ordered classification of the present invention;

Fig. 2 is a schematic diagram of the overall structure of the information-migration-based fundus image vessel segmentation network module of the present invention;

Fig. 3 is the mask image of this embodiment.

Detailed Description

The technical solution of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.

The core idea of the present invention is to design suitable algorithms for image preprocessing and enhancement according to the characteristics of fundus images, then segment the retinal vessels with the information-migration-based fundus image segmentation network, and finally perform intelligent analysis and prediction with a neural network combined with ordered classification, achieving intelligent analysis of retinal images.

The overall flow of the retinal image intelligent analysis system based on information migration and ordered classification of the present invention is shown in Fig. 1. The system comprises a fundus image preprocessing module, a fundus image vessel segmentation network module, and a fundus image intelligent analysis and prediction module; this embodiment further comprises a front-end display module.

The fundus image preprocessing module preprocesses the collected fundus images to improve image quality and highlight useful features.

The fundus image vessel segmentation network module segments the vessels of the preprocessed fundus image based on information migration to obtain a vessel image. Two networks are defined within this module: a guide network and a bionic network. The module defines the important information of the guide network and transfers the extracted knowledge of the guide network to the bionic network.

The fundus image intelligent analysis and prediction module predicts state changes of the vessel image based on ordered classification. In this embodiment, the progression of diabetic lesions is an ordered process, from mild to moderate to severe, so staged prediction of fundus image state changes can be achieved accurately.

The front-end display module reads the corresponding data from the database, presents the processing results of each algorithm submodule to the user, and interacts with the user.

In this embodiment, a dataset was first collected: 1,000 outpatients or inpatients diagnosed with type 2 diabetes between January 2020 and December 2022 were selected. After exclusion and routine examination, a specially trained diabetes specialist nurse photographed the fundus of each subject with a non-mydriatic fundus camera, obtaining a unique fundus image per subject.

The fundus image preprocessing module comprises an image enhancement submodule, a mask generation submodule, and a multi-scale linear filter submodule.

The image enhancement submodule applies the dual-tree complex wavelet transform and the improved top-hat transform to the collected original fundus image to obtain an enhanced fundus image. This submodule effectively improves retinal image quality, increases vessel contrast, and highlights characteristic elements (vessel structure, optic disc, macula).

The dual-tree complex wavelet transform (DT-CWT) was designed to overcome the shortcomings of the commonly used discrete wavelet transform: when the corresponding wavelet bases approximately satisfy the Hilbert transform relationship, the DT-CWT reduces the shift sensitivity of ordinary real wavelet transforms and improves directional selectivity, effectively improving image quality while preserving detail. After the original fundus image has undergone the DT-CWT, the improved top-hat transform is applied to the transformed image.

The improved top-hat transform specifically comprises the following steps: an opening operation is applied to the DT-CWT-transformed fundus image to obtain an opened fundus image; when the opened image is subtracted from the transformed image, pixels whose gray value changed keep their original value, while unchanged pixels take the subtraction result.

The morphological top-hat transform is performed by combining opening and closing operations, which are in turn derived from dilation and erosion. The traditional top-hat transform takes the original image minus the result of its opening as the final result, but this makes the result darker overall, so some darker edges cannot be displayed. In the improved top-hat transform of this application, when the opened image is subtracted from the complex-wavelet-transformed fundus image, pixels whose gray value changed keep their original value, while unchanged pixels take the subtraction result. With the improved top-hat transform, the gray-level differences of the image increase markedly, and edges with small amplitude changes are also effectively preserved.

After the dual-tree complex wavelet transform and the improved top-hat transform, a fundus image with enhanced local detail and improved quality is obtained.

The mask generation submodule extracts the field of view based on spatial luminance information: the enhanced fundus image (a color image whose local detail and quality have been improved by the complex wavelet and top-hat transforms) is converted from RGB to YIQ format, a segmentation threshold is set, the surrounding black field of view is extracted, and an erosion operation yields the useful-information region, separating the fundus region from the background and producing the mask image (as shown in Fig. 3). The mask image is multiplied with the enhanced fundus image to obtain the fundus-region image.

Analysis of the enhanced fundus image requires obtaining the ROI (region of interest), i.e., the fundus-region image, so that in subsequent processing the influence of pixels outside the ROI can be effectively avoided and computational complexity reduced; the mask image delimits the ROI from the useless regions.

The mask image is often figuratively called the "field of view". A mask image of the same size is generated from the original fundus image to separate the fundus region from the background region. To extract the mask image (the field of view of the fundus image) accurately, the present invention proposes a field-of-view extraction method based on spatial luminance information. On the basis of the YIQ-format image, the method separates luminance information from chrominance information and then extracts the surrounding black region by selecting a segmentation threshold. Unlike the RGB three-channel representation, in the YIQ format Y denotes the luminance information of the image, I the color change from orange to cyan, and Q the color change from purple to yellow-green. Because it carries both luminance and color information, the luminance component of the image can be separated out, which comprises the following steps:

Step 1: Convert the enhanced fundus image from RGB to YIQ format:

$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.312 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1)$$

Equation (1) yields the three components of the image in YIQ format.

Step 2: Set the segmentation threshold, extract the surrounding black field of view, and obtain the region of interest through an erosion operation.

The choice of segmentation threshold is the basis of the segmentation extraction. With a single threshold, pixel gray values are divided into two classes: those greater than the threshold and those smaller than it. The Y component denotes the luminance information of the image; the histogram of the Y component of the YIQ-format fundus image is bimodal, with two high-count regions and zero pixel counts at both ends, and separation is performed according to the characteristics of this histogram.

After a series of experiments, the segmentation threshold is set to 50 in this embodiment to extract and separate the black region, giving the mask image:

$$M(x', y) = \begin{cases} 1, & Y(x', y) > 50 \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$

where "1" denotes the background border and "0" the fundus vessels; Y is the luminance information of the image, equal to the gray value of the luminance component (the Y component of the YIQ fundus image); M(x′, y) is the extracted background border; and x′ and y are pixel coordinates. If the Y component of the pixel in row x′, column y is greater than 50, the point is regarded as background.

The multi-scale linear filter submodule applies the Hessian-matrix-based multi-scale linear filter, with parameters set according to the differing gray values and eigenvalues of the vessels in the fundus-region image; after filtering, noise is removed and the vessel features are further highlighted, yielding the filtered vessel feature image.

The Hessian matrix enables enhancement of specific structures (point structures / line structures) in the image, thereby extracting target features while removing other useless noise information.

In a two-dimensional image, the Hessian matrix is a 2×2 positive-definite matrix with two eigenvalues and corresponding eigenvectors. The two eigenvalues express the anisotropy of image variation along the directions of the two eigenvectors; if an ellipse is constructed from the eigenvectors and eigenvalues, it characterizes that anisotropy. Point structures in an image are isotropic, whereas linear structures are anisotropic. The Hessian properties are therefore used to construct a filter that enhances the linear structures in the fundus-region image, i.e., the vessel structures, while filtering out point-like structures, i.e., noise points.

As shown in Fig. 2, the fundus image vessel segmentation network module performs vessel segmentation on the filtered vessel feature image of the preprocessed fundus image by means of information migration. The module comprises two segmentation networks: a guide network and a bionic network. Information migration here means transferring the knowledge extracted by the pre-trained guide network model to the bionic network. The training process of the module comprises the following steps:

Step 201: The filtered vessel feature images and the segmentation labels are used as the common input of the guide network and the bionic network, and the guide network is trained on the vessel segmentation task. During iterative training of the guide network, a segmentation accuracy test is run on the validation set every 100 iterations, and the weights whose segmentation accuracy exceeds the preset threshold are saved. When iterative training is complete, the weights with the highest segmentation accuracy are selected as the optimal weights of the guide network (that is, during iterative training the network is tested on the validation set every 100 iterations, and the weights from the test with the highest segmentation accuracy are called the optimal weights).

Step 202: During iterative training of the bionic network, the guide network loads the optimal guide-network weights saved in step 201 and generates the guide codec matrices and guide residual similarities for the filtered vessel feature image; the bionic network generates the corresponding bionic codec matrices and bionic residual similarities for the same image. The bionic network fits the segmentation labels, the guide codec matrices, and the guide residual similarities, with an L2 loss function used as the constraint during fitting.

Step 203: The corresponding parameters of the bionic network (its codec matrix parameters, residual similarity parameters, and segmentation network parameters) are updated by back-propagation in the direction that decreases the loss function, and the process returns to step 202 for further iterative training. Every 100 iterations, a segmentation accuracy test is run on the validation set and the weights whose accuracy exceeds the preset threshold are saved; when iterative training is complete, the weights with the highest segmentation accuracy are selected as the final optimal weights of the bionic network.

During encoding and decoding in a neural network (the segmentation network's overall way of solving the problem is an encode-then-decode process), the flow of information embodies the method of solving the problem; transferring the network's way of solving the problem to the bionic network as knowledge information gives it better generalization.

In step 202, the guide codec matrix and the bionic codec matrix are G ∈ R^{m×n′} (an m×n′ matrix generated from the feature maps on the two sides), produced from the feature maps of the guide/bionic encoder and decoder. The output feature map of the encoder layer is F¹ ∈ R^{h×w×m}, where h, w, and m are the height, width, and channel count of the feature map of that layer; the output feature map of the decoder layer is F² ∈ R^{h×w×n′}, where h, w, and n′ are the height, width, and channel count of F². The codec matrix G ∈ R^{m×n′} is computed as:

$$G_{a,b}(x; W) = \sum_{s=1}^{h} \sum_{t=1}^{w} \frac{F^{1}_{s,t,a}(x; W) \, F^{2}_{s,t,b}(x; W)}{h \times w} \qquad (3)$$

where s = 1, …, h and t = 1, …, w; x and W denote the input image and the weights of the guide or bionic network, respectively; and G_{a,b}(x; W) is the entry in row a, column b of the (m×n′) guide or bionic codec matrix.

In step 202, the residual similarities are extracted by the multi-scale residual similarity (MRS) collection module, which captures the feature information of local regions; the module collects context information through similarity volumes, specifically comprising the following steps:

For the i-th feature vector Y⁽ⁱ⁾, a similarity value P_j′ is computed by element-wise multiplication between each center pixel P_center and the pixels P_j in its neighboring d×d region:

$$P_j' = P_j \times P_{\text{center}} \qquad (4)$$

where j indexes coordinates within the d×d region. A local representation is obtained for every pixel of the filtered vessel feature image, and the local representations are concatenated along the channel dimension to give the residual similarity of the i-th feature vector, denoted $R_d^{(i)}\in\mathbb{R}^{H\times W'\times d^2}$, where d is the size of the self-defined region and H and W′ are the height and width of the feature vector. Since the importance of the pixels around the center pixel decays with distance, the multi-scale residual similarity collection module evaluates the residual similarity $R_d^{(i)}$ at several scales, d = 3, 5, 7 (the different values of d embody the multi-scale design), and adds them together, giving the final residual similarity of the i-th feature vector:

$$R^{(i)}=\sum_{d\in\{3,5,7\}}R_d^{(i)}$$

Here $R_d^{(i)}$ denotes the per-scale (input) residual similarity and $R^{(i)}$ the final (output) residual similarity.
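The following sketch illustrates one plausible implementation of the similarity-volume computation for a single-channel feature map. The zero padding at the border and the reduction over the neighborhood axis before the scales are added are assumptions, since the disclosure states only that the per-scale residual similarities are summed.

```python
import numpy as np

def similarity_volume(feat: np.ndarray, d: int) -> np.ndarray:
    """Similarity volume at one scale d for a single-channel map.

    feat: feature map of shape (H, W).
    Returns (H, W, d*d): element-wise products P_j' = P_j * P_center
    between every center pixel and its d x d neighborhood (zero-padded).
    """
    r = d // 2
    padded = np.pad(feat, r, mode='constant')
    H, W = feat.shape
    vol = np.empty((H, W, d * d), dtype=float)
    k = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = padded[r + dy:r + dy + H, r + dx:r + dx + W]
            vol[..., k] = feat * shifted   # P_center * P_j at every pixel
            k += 1
    return vol

def multi_scale_residual_similarity(feat: np.ndarray, scales=(3, 5, 7)) -> np.ndarray:
    """Sum the per-scale similarities over d = 3, 5, 7.

    The per-scale volumes have different channel counts (d*d), so this
    sketch reduces each over its neighborhood axis before adding the
    scales -- an assumed reduction, as the disclosure only says the
    per-scale residual similarities are added together.
    """
    return sum(similarity_volume(feat, d).sum(axis=-1) for d in scales)
```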

The guide network generates the guide codec matrices $G_i^T$ and the guide residual similarities $R_i^T$; the bionic network generates the bionic codec matrices $G_i^S$ and the bionic residual similarities $R_i^S$, i = 1, …, n. Information migration drives the bionic codec matrix and the bionic residual similarity toward the guide codec matrix and the guide residual similarity, respectively. The loss function of the information migration task is:

$$L_{IT}(W_t,W_s)=\frac{1}{N}\sum_{x}\sum_{i=1}^{n}\left(\lambda_i\left\|G_i^T(x;W_t)-G_i^S(x;W_s)\right\|_2^2+\beta_i\left\|R_i^T(x;W_t)-R_i^S(x;W_s)\right\|_2^2\right)$$

where $W_t$ is the guide network weight and $W_s$ the bionic network weight; $G_i^T$ denotes the codec matrix of the i-th feature vector of the guide network and $G_i^S$ that of the bionic network; n is the number of feature vectors; $\lambda_i$ and $\beta_i$ are the weight factors of the corresponding loss terms; and N is the number of data points.
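A compact sketch of the migration loss for one training sample follows, under the reconstruction above; the 1/N average over samples is left to the caller, and all names are illustrative.

```python
import numpy as np

def information_migration_loss(G_T, G_S, R_T, R_S, lam, beta):
    """Per-sample information-migration loss over n feature levels.

    G_T/G_S: per-level guide/bionic codec matrices G_i^T, G_i^S.
    R_T/R_S: per-level guide/bionic residual similarities R_i^T, R_i^S.
    lam/beta: per-level weight factors lambda_i, beta_i.
    """
    loss = 0.0
    for gt, gs, rt, rs, l, b in zip(G_T, G_S, R_T, R_S, lam, beta):
        # Squared L2 distance pulls the bionic terms toward the guide terms.
        loss += l * np.sum((gt - gs) ** 2) + b * np.sum((rt - rs) ** 2)
    return loss
```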

The ordered-classification-based fundus image intelligent analysis and prediction module predicts state changes of the segmented vessel image. It should be noted that this embodiment experiments on fundus images of diabetic patients as samples; the present application is applicable to the image processing of all fundus images;

The basic principle of ordered multi-class classification is that the dependent variable carries an ordering, so the ordered multi-class problem is split in turn into several simple binary problems. In the diabetic retinopathy staging problem studied in this embodiment, severity is divided into 5 stages: normal (stage 0), mild NPDR (stage 1), moderate NPDR (stage 2), severe NPDR (stage 3) and PDR (stage 4). Under ordered classification the problem splits into 4 binary problems: (0 vs 1+2+3+4), (0+1 vs 2+3+4), (0+1+2 vs 3+4) and (0+1+2+3 vs 4).
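As a sketch of this encoding, the split and its inverse can be written as follows (the names are illustrative); the t-th binary target is positive exactly when the stage lies on the severe side of the (0…t vs t+1…4) split, and summing the binary decisions recovers the stage.

```python
import numpy as np

def stage_to_binary_targets(stage: int, num_stages: int = 5) -> np.ndarray:
    """Encode a DR stage in {0,...,4} as 4 binary targets: target t is 1
    iff stage > t, i.e. the severe side of split (0..t vs t+1..4)."""
    return (stage > np.arange(num_stages - 1)).astype(int)

def binary_outputs_to_stage(outputs) -> int:
    """Integrate the 4 binary decisions: the predicted stage is their sum."""
    return int(np.sum(outputs))

# e.g. stage 2 -> targets [1, 1, 0, 0]; summing those decisions recovers 2.
```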

If the diabetic retinopathy staging problem of this embodiment were solved with a conventional convolutional neural network, it would be treated as a plain multi-class problem: after several convolution, pooling and fully connected layers, a final softmax regression completes the classification. Doing so, however, ignores the intrinsic ordering of the lesion stages. The network structure here mainly follows GoogLeNet, with the last layer of the network set to ordered classification, whose loss function is:

$$L=-\frac{1}{N}\sum_{c=1}^{N}\sum_{t=1}^{T}\gamma_t\,w_c^t\Big(y_c^t\log p_c^t+\big(1-y_c^t\big)\log\big(1-p_c^t\big)\Big),\qquad p_c^t=p\big(o_c^t=1\mid x_c;W_t\big)$$

where N is the number of data points, T the number of binary classification tasks and $\gamma_t$ the weight of each binary task; $o_c^t$ denotes the output of the c-th sample on the t-th task and $y_c^t$ the true label; $w_c^t$ is the weight of each sample in the t-th binary task, optimized as training proceeds; $W_t$ denotes the parameters of the t-th classifier, $x_c$ the c-th input vector, and $p(\cdot)$ the probability model. Each binary task outputs either 0 or 1, and the final classification is an integrated judgment over the results of all binary tasks; specifically, in this task the outputs of the 4 binary tasks are summed to obtain the final stage prediction of the disease.

The front-end user interface module presents the corresponding results of each stage to the user; once training is complete, the system performs intelligent fundus image analysis.

As shown in FIG. 1, a retinal image intelligent analysis method based on information migration and ordered classification comprises the following steps:

Step 1: perform image preprocessing on the collected fundus images to improve image quality and highlight useful features;

Step 2: perform fundus image vessel segmentation based on information migration; segment the vessels of the preprocessed fundus image to obtain a vessel image;

Step 3: predict state changes of the vessel image based on ordered classification.

Step 1 specifically comprises the following steps:

S101: apply the dual-tree complex wavelet transform and the improved top-hat transform to the collected original fundus image to obtain an enhanced fundus image; step S101 effectively improves retinal image quality, raises vessel contrast and highlights the characteristic elements (vessel structure, optic disc, macula);

The dual-tree complex wavelet transform (DT-CWT) overcomes defects of the commonly used discrete wavelet transform: when the corresponding wavelet bases approximately satisfy the Hilbert transform relationship, the DT-CWT reduces the shift sensitivity of the ordinary real wavelet transform and improves directional selectivity, effectively improving image quality while retaining detail information. After the original fundus image undergoes the DT-CWT, the improved top-hat transform is applied to the transformed fundus image;
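A minimal sketch of the DT-CWT stage is shown below, using the third-party dtcwt Python package (an assumption; the disclosure does not name an implementation). The soft-threshold rule on the highpass coefficients is purely illustrative: the disclosure states that the transform improves quality while retaining detail, but not how the coefficients are processed.

```python
import numpy as np
import dtcwt  # third-party package; its use here is an assumption

def dtcwt_enhance(img: np.ndarray, nlevels: int = 4, k: float = 0.1) -> np.ndarray:
    """Forward/inverse DT-CWT round trip with soft-thresholded highpass
    coefficients; the threshold (factor k of each subband's maximum
    magnitude) is an illustrative choice, not the patent's rule.
    """
    transform = dtcwt.Transform2d()
    pyr = transform.forward(img.astype(float), nlevels=nlevels)
    shrunk = []
    for hp in pyr.highpasses:                  # complex subband coefficients
        mag = np.abs(hp)
        thr = k * mag.max()
        scale = np.maximum(mag - thr, 0.0) / np.maximum(mag, 1e-12)
        shrunk.append(hp * scale)              # soft shrinkage, phase kept
    return transform.inverse(dtcwt.Pyramid(pyr.lowpass, tuple(shrunk)))
```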

The improved top-hat transform specifically comprises the following steps: perform an opening operation on the fundus image after the dual-tree complex wavelet transform to obtain an opened fundus image; when the opened fundus image is subtracted from the transformed fundus image, pixels whose grey value was changed keep their original value, and unchanged pixels take the subtraction result;

The morphological top-hat transform is built from combinations of opening and closing operations, which are in turn derived from dilation and erosion. The traditional top-hat transform takes the original image minus its opening as the final result, but this darkens the result overall, so some darker edges cannot be displayed. In the improved top-hat transform of the present application, when the opened fundus image is subtracted from the complex-wavelet-transformed fundus image, pixels whose grey value was changed keep their original value, while unchanged pixels take the subtraction result; after the improved transform, grey-level differences grow markedly and edges with small amplitude changes are also effectively protected;
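A sketch of the improved top-hat rule follows, using SciPy's grey opening. The structuring-element size is an assumption, as the disclosure does not specify it.

```python
import numpy as np
from scipy import ndimage

def improved_top_hat(img: np.ndarray, size: int = 15) -> np.ndarray:
    """Improved top-hat as described above: subtract the opened image, but
    wherever the opening changed a pixel's grey value, keep the original
    value instead of the (smaller) difference."""
    opened = ndimage.grey_opening(img, size=(size, size))
    changed = opened != img            # pixels altered by the opening
    # Unchanged pixels take img - opened (which is 0 there); changed pixels
    # keep their original grey value, protecting low-amplitude edges.
    return np.where(changed, img, img - opened)
```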

After the dual-tree complex wavelet transform and the improved top-hat transform, a fundus image with enhanced local detail and improved image quality is obtained;

S102: extract the field of view based on spatial brightness information: convert the enhanced fundus image (a color image whose local detail has been enhanced and quality improved by the complex wavelet and top-hat transforms) from RGB format to YIQ format, set a segmentation threshold, extract the surrounding black field of view, obtain the useful information region through an erosion operation to separate the fundus region from the background region and obtain a mask image, and multiply the mask image with the enhanced fundus image to obtain the fundus region image.

Analysis of the enhanced fundus image requires a region of interest (ROI), i.e. the fundus region image; this effectively avoids the influence of pixels outside the ROI in subsequent processing and lowers computational complexity. The mask image delimits the ROI from the useless region;

The mask image is often figuratively called the "field of view". A mask image of the same size is generated from the original fundus image to separate the fundus region from the background region. To extract the mask image (the fundus image field of view) accurately, the present invention proposes a field-of-view extraction method based on spatial brightness information: on the basis of a YIQ-format image it separates the brightness information from the chrominance information and then extracts the surrounding black region by selecting a segmentation threshold. Unlike the three RGB color channels, in the YIQ format Y carries the brightness information of the image, I the orange-to-cyan color variation, and Q the purple-to-yellow-green color variation. Because the format carries both brightness and color information, the brightness component of the image can be separated and extracted;

Mask image extraction specifically comprises the following steps:

Step 1: convert the enhanced fundus image from RGB format to YIQ format:

$$\begin{bmatrix}Y\\I\\Q\end{bmatrix}=\begin{bmatrix}0.299&0.587&0.114\\0.596&-0.274&-0.322\\0.211&-0.523&0.312\end{bmatrix}\begin{bmatrix}R\\G\\B\end{bmatrix}\tag{1}$$

Equation (1) yields the three components of the YIQ-format image;

Step 2: set the segmentation threshold, extract the surrounding black field of view, and obtain the region of interest through an erosion operation;

The choice of the segmentation threshold is the basis of the extraction. A single threshold divides pixel grey values into two classes: those above the threshold and those below it. The Y component carries the brightness information of the image; the Y-component histogram of a YIQ-format fundus image is bimodal, comprising two high-count regions with zero pixel counts at both ends, and the separation exploits these histogram characteristics;

After a series of experiments, this embodiment sets the segmentation threshold to 50 for extracting and separating the black region, obtaining the mask image:

$$M(x',y)=\begin{cases}1,&Y(x',y)>50\\0,&\text{otherwise}\end{cases}\tag{2}$$

where "1" denotes the background border and "0" the ocular vessels; Y is the brightness information of the image, equal to the grey value of the luminance component (the Y component of the YIQ-format fundus image); M(x′, y) is the extracted background border, with x′, y the pixel coordinates: if the Y component of the pixel at row x′, column y exceeds 50, the point is regarded as background;
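A sketch of the mask extraction follows, using the Y component of equation (1) and the threshold of equation (2). The Y > 50 background convention mirrors the text above, and the number of erosion iterations is an assumption.

```python
import numpy as np
from scipy import ndimage

def fundus_roi_mask(rgb: np.ndarray, thresh: float = 50.0) -> np.ndarray:
    """Field-of-view extraction per equations (1)-(2).

    rgb: enhanced fundus image, shape (H, W, 3), values in 0..255.
    Returns a uint8 mask that is 1 on the fundus region, 0 on background.
    """
    # Y (luma) component of the RGB -> YIQ conversion, equation (1).
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    background = y > thresh          # M(x', y) = 1, per the stated convention
    # Erosion trims the ragged rim of the useful-information region;
    # the iteration count is an assumption.
    roi = ndimage.binary_erosion(~background, iterations=3)
    return roi.astype(np.uint8)

# The fundus region image is then the product of mask and enhanced image:
# roi_img = rgb * fundus_roi_mask(rgb)[..., None]
```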

S103: with a Hessian-matrix-based multi-scale linear filter, set the parameters according to the grey values and eigenvalues of the vessels in the fundus region image; noise is eliminated after filtering, the vessel features are further highlighted, and the filtered vessel feature image is obtained;

The Hessian matrix enhances specific structures (point structures/line structures) in the image, extracting the target features while rejecting other useless noise information;

In a two-dimensional image, the Hessian is a 2×2 symmetric matrix with two eigenvalues and corresponding eigenvectors. The two eigenvalues express the anisotropy of the image variation along the directions of the two eigenvectors: an ellipse built from the eigenvectors and eigenvalues depicts that anisotropy. Point structures in an image are isotropic, whereas linear structures are anisotropic. The Hessian properties are therefore used to construct a filter that enhances the linear structures in the fundus region image, i.e. the vessel structures, while filtering out point-like structures, i.e. noise points.
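The multi-scale Hessian line filter can be sketched in the Frangi style as below. The parameters beta and c, the scale set, and the bright/dark sign convention are assumptions: the disclosure says only that the parameters are set from the vessels' grey values and eigenvalues.

```python
import numpy as np
from scipy import ndimage

def hessian_line_filter(img, sigmas=(1, 2, 3), beta=0.5, c=15.0):
    """Multi-scale Hessian line filter (a Frangi-style vesselness sketch).

    At each scale, the eigenvalues l1, l2 (ordered so |l1| <= |l2|) of the
    scale-normalized Gaussian Hessian separate line-like structures
    (|l2| >> |l1|) from blobs and noise.
    """
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    for s in sigmas:
        # Second derivatives of the Gaussian-smoothed image, scale-normalized.
        hxx = ndimage.gaussian_filter(img, s, order=(0, 2)) * s ** 2
        hyy = ndimage.gaussian_filter(img, s, order=(2, 0)) * s ** 2
        hxy = ndimage.gaussian_filter(img, s, order=(1, 1)) * s ** 2
        # Eigenvalues of the 2x2 symmetric Hessian at every pixel.
        root = np.sqrt((hxx - hyy) ** 2 + 4 * hxy ** 2)
        l1 = (hxx + hyy + root) / 2
        l2 = (hxx + hyy - root) / 2
        swap = np.abs(l1) > np.abs(l2)
        l1[swap], l2[swap] = l2[swap], l1[swap]       # enforce |l1| <= |l2|
        rb = np.abs(l1) / (np.abs(l2) + 1e-12)        # blobness ratio
        s2 = np.sqrt(hxx ** 2 + 2 * hxy ** 2 + hyy ** 2)  # structureness
        v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s2 ** 2 / (2 * c ** 2)))
        v[l2 > 0] = 0   # keep bright lines on dark background; flip for dark vessels
        out = np.maximum(out, v)                      # best response across scales
    return out
```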

In step 2, vessel segmentation is performed on the filtered vessel feature image of the preprocessed fundus image based on information migration. The fundus image vessel segmentation network module comprises two segmentation networks, a guide network and a bionic network; information migration means migrating the knowledge information extracted by the pretrained guide network model to the bionic network. Step 2 comprises the following steps:

Step 201: use the filtered vessel feature image and the segmentation labels as the common input of the guide network and the bionic network, and train the guide network on the vessel segmentation task. During iterative training of the guide network, a segmentation accuracy test is run on the validation set every 100 iterations and the weights whose segmentation accuracy exceeds the set segmentation threshold are saved; after the iterative training is complete, the weights with the highest segmentation accuracy are selected as the optimal weights of the guide network (that is, during iterative training the network is tested on the validation set every 100 iterations, and the test with the highest segmentation accuracy defines the optimal weights);

The overall data set is divided into a training set, a validation set and a test set; the guide network and the bionic network use the same data set;

Step 202: during iterative training of the bionic network, the guide network loads the optimal guide-network weights saved in step 201 and generates the guide codec matrix and the guide residual similarity for the filtered vessel feature image; the bionic network generates the corresponding bionic codec matrix and bionic residual similarity for the filtered vessel feature image and fits the segmentation labels, the guide codec matrix and the guide residual similarity, with the L2 loss function as the constraint during fitting;

Step 203: update the bionic codec matrix, the bionic residual similarity parameters and the segmentation network parameters of the bionic network by back-propagation in the direction that decreases the loss function value (the parameters comprise the bionic network's codec matrix parameters, residual similarity parameters and segmentation network parameters), and jump to step 202 for iterative training; every 100 iterations, a segmentation accuracy test is run on the validation set and the weights whose accuracy exceeds the set segmentation threshold are saved; after the iterative training is complete, the weights with the highest segmentation accuracy are selected as the final optimal weights of the bionic network.
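The checkpoint policy shared by steps 201 and 203 can be sketched as below. The state_dict interface is a PyTorch-style assumption, and train_step/eval_acc stand in for the unspecified training and validation routines.

```python
import copy

def train_with_best_checkpoint(model, train_step, eval_acc,
                               max_iters, seg_threshold, period=100):
    """Test on the validation set every `period` iterations, log every
    weight whose accuracy beats the set threshold, and finish with the
    highest-accuracy weights loaded."""
    best_acc = 0.0
    best_state = copy.deepcopy(model.state_dict())
    saved = []                                    # (accuracy, iteration) log
    for it in range(1, max_iters + 1):
        train_step(model)                         # one training iteration
        if it % period == 0:
            acc = eval_acc(model)                 # validation-set accuracy
            if acc > seg_threshold:
                saved.append((acc, it))           # weights worth keeping
            if acc > best_acc:
                best_acc = acc
                best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)             # final optimal weights
    return best_acc, saved
```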

In step 202, the guide codec matrix and the bionic codec matrix are $G\in\mathbb{R}^{m\times n'}$, computed from the feature maps of the guide/bionic encoder and the guide/bionic decoder. The output feature map of the encoder layer is $F^{1}\in\mathbb{R}^{h\times w\times m}$, where h, w and m are the height, width and number of channels of the $F^{1}$ feature map, and the output feature map of the decoder layer is $F^{2}\in\mathbb{R}^{h\times w\times n'}$, where h, w and n′ are the height, width and number of channels of the $F^{2}$ feature map. The codec matrix $G\in\mathbb{R}^{m\times n'}$ is calculated as:

$$G_{a,b}(x;W)=\sum_{s=1}^{h}\sum_{t=1}^{w}\frac{F^{1}_{s,t,a}(x;W)\times F^{2}_{s,t,b}(x;W)}{h\times w}\tag{3}$$

where s = 1, …, h and t = 1, …, w; x and W denote the input image and the weights of the guide/bionic network (the guide network or the bionic network), respectively; and $G_{a,b}(x;W)$ is the entry in row a, column b of the guide or bionic codec matrix.

In step 202, the residual similarity is extracted by the multi-scale residual similarity (MRS) collection module, which collects and captures the feature information of local regions; the MRS collection module gathers context information through similarity volumes, specifically comprising the following steps:

For the i-th feature vector $Y^{(i)}$, the similarity value $P_j'$ is computed by element-wise multiplication between each center pixel $P_{center}$ and the pixels $P_j$ in its adjacent d×d region:

$$P_j'=P_j\times P_{center}\tag{4}$$

where j denotes coordinates within the d×d region. A local representation is obtained for every pixel of the filtered vessel feature image, and the local representations are concatenated along the channel dimension to give the residual similarity $R_d^{(i)}\in\mathbb{R}^{H\times W'\times d^2}$ of the i-th input feature vector, where d is the size of the self-defined region and H and W′ are the height and width of the feature vector. Since the importance of the pixels around the center pixel decays with distance, the multi-scale residual similarity collection module evaluates $R_d^{(i)}$ at the scales d = 3, 5, 7 (the different values of d embody the multi-scale design) and adds them together, giving the final residual similarity of the i-th feature vector:

$$R^{(i)}=\sum_{d\in\{3,5,7\}}R_d^{(i)}$$

Here $R_d^{(i)}$ denotes the per-scale (input) residual similarity and $R^{(i)}$ the final (output) residual similarity.

The guide network generates the guide codec matrices $G_i^T$ and the guide residual similarities $R_i^T$; the bionic network generates the bionic codec matrices $G_i^S$ and the bionic residual similarities $R_i^S$, i = 1, …, n. Information migration drives the bionic codec matrix and the bionic residual similarity toward the guide codec matrix and the guide residual similarity, respectively. The loss function of the information migration task is:

$$L_{IT}(W_t,W_s)=\frac{1}{N}\sum_{x}\sum_{i=1}^{n}\left(\lambda_i\left\|G_i^T(x;W_t)-G_i^S(x;W_s)\right\|_2^2+\beta_i\left\|R_i^T(x;W_t)-R_i^S(x;W_s)\right\|_2^2\right)$$

where $W_t$ is the guide network weight and $W_s$ the bionic network weight; $G_i^T$ denotes the codec matrix of the i-th feature vector of the guide network and $G_i^S$ that of the bionic network; n is the number of feature vectors; $\lambda_i$ and $\beta_i$ are the weight factors of the corresponding loss terms; and N is the number of data points.

Step S3 predicts state changes for the segmented vessel image, splitting the ordered multi-class problem in turn into several binary classification problems;

Specifically, for an M-class problem (M denotes the number of state-change classes to predict; M = 5 in this embodiment), ordered classification splits the problem into M−1 binary classification problems: (0 vs 1+2+…+M−1), (0+1 vs 2+3+…+M−1), …, and (0+1+2+…+M−2 vs M−1). Exploiting the ordering that exists among the variables, ordered classification is introduced into the network structure for intelligent fundus image analysis and prediction (the network here is a classification network); the loss function of the ordered classification is:

$$L=-\frac{1}{N}\sum_{c=1}^{N}\sum_{t=1}^{T}\gamma_t\,w_c^t\Big(y_c^t\log p_c^t+\big(1-y_c^t\big)\log\big(1-p_c^t\big)\Big),\qquad p_c^t=p\big(o_c^t=1\mid x_c;W_t\big)$$

where N is the number of data points, T the number of binary classification tasks and $\gamma_t$ the weight of the t-th binary task; $o_c^t$ denotes the output of the c-th sample on the t-th binary task and $y_c^t$ the true label of the t-th binary task for the c-th sample; $w_c^t$ is the weight of the c-th sample in the t-th binary task, optimized as the network is trained; $W_t$ denotes the parameters of the classifier of the t-th binary task (specifically, the number of network layers and the filter sizes); $x_c$ is the c-th input vector; and $p(\cdot)$ denotes the probability model. The final classification result is an integrated judgment over the classification results of the individual binary tasks.

The technical solutions of the present invention have thus far been described in conjunction with the preferred embodiments shown in the accompanying drawings; however, those skilled in the art will readily understand that the protection scope of the present invention is clearly not limited to these specific embodiments. Without departing from the principle of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions resulting from such changes or substitutions will fall within the protection scope of the present invention.

The specification provided here sets forth numerous specific details. It will be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.

Similarly, it should be understood that, in the above description of exemplary embodiments of the present invention, various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects. This disclosed method, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the present invention.

Those skilled in the art will appreciate that the modules, units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiment, or alternatively may be located in one or more devices different from the device in the example. The modules in the foregoing examples may be combined into one module or, furthermore, divided into multiple submodules.

Those skilled in the art will understand that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components of an embodiment may be combined into one module, unit or component, and may furthermore be divided into multiple submodules, subunits or subcomponents. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any fashion. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.

Furthermore, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the claims below, any one of the claimed embodiments may be used in any combination.

Furthermore, some of the embodiments are described herein as methods, or as combinations of method elements, that can be implemented by a processor of a computer system or by other means of carrying out the function. A processor having the necessary instructions for implementing such a method or method element therefore forms a means for implementing the method or method element. Moreover, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.

The various techniques described here may be implemented in conjunction with hardware or software, or a combination thereof. Thus, the method and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in a tangible medium, such as a floppy disk, CD-ROM, hard drive, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine such as a computer, the machine becomes an apparatus for practicing the invention.

In the case of program code executing on programmable computers, the computing device generally includes a processor, a processor-readable storage medium (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. The memory is configured to store the program code; the processor is configured to execute the method of the present invention according to the instructions in the program code stored in the memory.

By way of example, and not limitation, computer-readable media comprise computer storage media and communication media. Computer storage media store information such as computer-readable instructions, data structures, program modules or other data. Communication media generally embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer-readable media.

As used herein, unless otherwise specified, the use of the ordinals "first", "second", "third", etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.

Although the present invention has been described in terms of a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention thus described. Moreover, it should be noted that the language used in this specification has been chosen principally for readability and instructional purposes rather than to explain or define the inventive subject matter. Many modifications and variations will therefore be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. As regards the scope of the present invention, the disclosure made herein is illustrative rather than restrictive, the scope of the invention being defined by the appended claims.

Claims (2)

1. A retinal image intelligent analysis system based on information migration and ordered classification, characterized by comprising a fundus image preprocessing module, a fundus image vessel segmentation network module and a fundus image intelligent analysis and prediction module;

the fundus image preprocessing module performs image preprocessing on the collected fundus image;

the fundus image vessel segmentation network module performs vessel segmentation on the preprocessed fundus image based on information migration to obtain a vessel image, a guide network and a bionic network being defined in the fundus image vessel segmentation network module;

the fundus image intelligent analysis and prediction module predicts state changes of the vessel image based on ordered classification;

the fundus image preprocessing module comprises an image enhancement submodule, a mask generation submodule and a multi-scale linear filter submodule;

the image enhancement submodule applies the dual-tree complex wavelet transform and the improved top-hat transform to the collected original fundus image to obtain an enhanced fundus image;

the improved top-hat transform specifically comprises the following steps: performing an opening operation on the fundus image after the dual-tree complex wavelet transform to obtain an opened fundus image; when the opened fundus image is subtracted from the fundus image after the dual-tree complex wavelet transform, pixels whose grey value was changed keep their original value, and unchanged pixels take the subtraction result;

the mask generation submodule extracts the field of view based on spatial brightness information: it converts the enhanced fundus image from RGB format to YIQ format, sets a segmentation threshold, extracts the surrounding black field of view, obtains the useful information region through an erosion operation, separates the fundus region from the background region to obtain a mask image, and multiplies the mask image with the enhanced fundus image to obtain the fundus region image, specifically comprising the following steps:

Step 1: convert the enhanced fundus image from RGB format to YIQ format:

$$\begin{bmatrix}Y\\I\\Q\end{bmatrix}=\begin{bmatrix}0.299&0.587&0.114\\0.596&-0.274&-0.322\\0.211&-0.523&0.312\end{bmatrix}\begin{bmatrix}R\\G\\B\end{bmatrix}\tag{1}$$

equation (1) yields the three components of the YIQ-format fundus image;

Step 2: set the segmentation threshold, extract the surrounding black field of view, and obtain the region of interest through an erosion operation;

the mask image is obtained as:

$$M(x',y)=\begin{cases}1,&Y(x',y)>50\\0,&\text{otherwise}\end{cases}\tag{2}$$

where "1" denotes the background border and "0" the ocular vessels; Y is the brightness information of the image, equal to the grey value of the luminance component; M(x′, y) is the extracted background border, with x′, y the pixel coordinates;

the multi-scale linear filter submodule is a Hessian-matrix-based multi-scale linear filter whose parameters are set according to the grey values and eigenvalues of the vessels in the fundus region image, noise being eliminated after filtering to obtain the filtered vessel feature image;

the training process of the fundus image vessel segmentation network module comprises the following steps:

Step 201: use the filtered vessel feature image and the segmentation labels as the common input of the guide network and the bionic network and train the guide network on the vessel segmentation task; during iterative training of the guide network, run a segmentation accuracy test on the validation set every A iterations and save the weights whose segmentation accuracy exceeds the set segmentation threshold; after the iterative training is complete, select the weights with the highest segmentation accuracy as the optimal weights of the guide network;

Step 202: during iterative training of the bionic network, the guide network loads the optimal guide-network weights saved in step 201 and generates the guide codec matrix and the guide residual similarity for the filtered vessel feature image; the bionic network generates the corresponding bionic codec matrix and bionic similarity matrix for the filtered vessel feature image and fits the segmentation labels, the guide codec matrix and the guide residual similarity, a loss function being used as the constraint during fitting;

Step 203: update the bionic codec matrix, the bionic residual similarity parameters and the segmentation network parameters of the bionic network by back-propagation in the direction that decreases the loss function value, and jump to step 202 for iterative training; every A iterations, run a segmentation accuracy test on the validation set and save the weights whose accuracy exceeds the set segmentation threshold; after the iterative training is complete, select the weights with the highest segmentation accuracy as the final optimal weights of the bionic network;

in step 202, the guide codec matrix and the bionic codec matrix are $G\in\mathbb{R}^{m\times n'}$, generated from the feature maps of the guide/bionic encoder and the guide/bionic decoder; the output feature map of the encoder layer is $F^{1}\in\mathbb{R}^{h\times w\times m}$, where h, w and m are the height, width and number of channels of the $F^{1}$ feature map, and the output feature map of the decoder layer is $F^{2}\in\mathbb{R}^{h\times w\times n'}$, where h, w and n′ are the height, width and number of channels of the $F^{2}$ feature map; the codec matrix $G\in\mathbb{R}^{m\times n'}$ is calculated as:

$$G_{a,b}(x;W)=\sum_{s=1}^{h}\sum_{t=1}^{w}\frac{F^{1}_{s,t,a}(x;W)\times F^{2}_{s,t,b}(x;W)}{h\times w}$$

where s = 1, …, h and t = 1, …, w; x and W denote the input image and the weights of the guide/bionic network, respectively; $G_{a,b}(x;W)$ is the entry in row a, column b of the guide or bionic codec matrix;

step 202 specifically comprises the following steps:

for the i-th feature vector $Y^{(i)}$, compute the similarity value $P_j'$ by element-wise multiplication between each center pixel $P_{center}$ and the pixels $P_j$ in its adjacent d×d region:

$$P_j'=P_j\times P_{center}$$

where, in step 202, j denotes coordinates within the d×d region; a local representation is obtained for every pixel of the filtered vessel feature image, and the local representations are concatenated along the channel dimension to give the residual similarity $R_d^{(i)}$ of the i-th feature vector, where d is the size of the region and H and W′ are the height and width of the feature vector; the residual similarities $R_d^{(i)}$ obtained for different values of d are added together, giving the final residual similarity of the i-th feature vector $R^{(i)}=\sum_{d}R_d^{(i)}$;

the guide network generates the guide codec matrices $G_i^T$ and the guide residual similarities $R_i^T$, and the bionic network generates the bionic codec matrices $G_i^S$ and the bionic residual similarities $R_i^S$, i = 1, …, n; the loss function of the information migration task is:

$$L_{IT}(W_t,W_s)=\frac{1}{N}\sum_{x}\sum_{i=1}^{n}\left(\lambda_i\left\|G_i^T(x;W_t)-G_i^S(x;W_s)\right\|_2^2+\beta_i\left\|R_i^T(x;W_t)-R_i^S(x;W_s)\right\|_2^2\right)$$

where $W_t$ is the guide network weight and $W_s$ the bionic network weight; $G_i^T$ denotes the codec matrix of the i-th feature vector of the guide network and $G_i^S$ that of the bionic network; n is the number of feature vectors; $\lambda_i$ and $\beta_i$ are the weight factors of the corresponding loss terms; and N is the number of data points;

the ordered-classification-based fundus image intelligent analysis and prediction module predicts state changes of the segmented vessel image, splitting the ordered multi-class problem in turn into several binary classification problems;

the loss function of the ordered classification is:

$$L=-\frac{1}{N}\sum_{c=1}^{N}\sum_{t=1}^{T}\gamma_t\,w_c^t\Big(y_c^t\log p_c^t+\big(1-y_c^t\big)\log\big(1-p_c^t\big)\Big),\qquad p_c^t=p\big(o_c^t=1\mid x_c;W_t\big)$$

where N is the number of data points, T the number of binary classification tasks and $\gamma_t$ the weight of the t-th binary task; $o_c^t$ denotes the output of the c-th sample on the t-th binary task and $y_c^t$ the true label of the t-th binary task for the c-th sample; $w_c^t$ is the weight of the c-th sample in the t-th binary task; $W_t$ denotes the parameters of the classifier of the t-th binary task; $x_c$ is the c-th input vector; and $p(\cdot)$ denotes the probability model.
2. A retinal image intelligent analysis method based on information migration and ordered classification, characterized by comprising the following steps:

Step 1: perform image preprocessing on the collected fundus image;

Step 2: perform fundus image vessel segmentation based on information migration: segment the vessels of the preprocessed fundus image to obtain a vessel image;

Step 3: predict state changes of the vessel image based on ordered classification;

Step 1 specifically comprises the following steps:

S101: apply the dual-tree complex wavelet transform and the improved top-hat transform to the collected original fundus image to obtain an enhanced fundus image;

the improved top-hat transform specifically comprises the following steps: performing an opening operation on the fundus image after the dual-tree complex wavelet transform to obtain an opened fundus image; when the opened fundus image is subtracted from the transformed fundus image, pixels whose grey value was changed keep their original value, and unchanged pixels take the subtraction result;

S102: extract the field of view based on spatial brightness information: convert the enhanced fundus image from RGB format to YIQ format, set a segmentation threshold, extract the surrounding black field of view, obtain the useful information region through an erosion operation to separate the fundus region from the background region and obtain a mask image, and multiply the mask image with the enhanced fundus image to obtain the fundus region image;

S103: with a Hessian-matrix-based multi-scale linear filter, set the parameters according to the grey values and eigenvalues of the vessels in the fundus region image, eliminate noise after filtering, and obtain the filtered vessel feature image;

Step 2 specifically comprises the following steps:

Step 201: use the filtered vessel feature image and the segmentation labels as the common input of the guide network and the bionic network and train the guide network on the vessel segmentation task; during iterative training of the guide network, run a segmentation accuracy test on the validation set every A iterations and save the weights whose segmentation accuracy exceeds the set segmentation threshold; after the iterative training is complete, select the weights with the highest segmentation accuracy as the optimal weights of the guide network;

Step 202: during iterative training of the bionic network, the guide network loads the optimal guide-network weights saved in step 201 and generates the guide codec matrix and the guide residual similarity for the filtered vessel feature image; the bionic network generates the corresponding bionic codec matrix and bionic similarity matrix for the filtered vessel feature image and fits the segmentation labels, the guide codec matrix and the guide residual similarity, a loss function being used as the constraint during fitting;

Step 203: update the bionic codec matrix, the bionic residual similarity parameters and the segmentation network parameters of the bionic network by back-propagation in the direction that decreases the loss function value, and jump to step 202 for iterative training; every A iterations, run a segmentation accuracy test on the validation set and save the weights whose accuracy exceeds the set segmentation threshold; after the iterative training is complete, select the weights with the highest segmentation accuracy as the final optimal weights of the bionic network;

in step 202, the guide codec matrix and the bionic codec matrix are $G\in\mathbb{R}^{m\times n'}$, generated from the feature maps of the guide/bionic encoder and the guide/bionic decoder; the output feature map of the encoder layer is $F^{1}\in\mathbb{R}^{h\times w\times m}$, where h, w and m are the height, width and number of channels of the $F^{1}$ feature map, and the output feature map of the decoder layer is $F^{2}\in\mathbb{R}^{h\times w\times n'}$, where h, w and n′ are the height, width and number of channels of the $F^{2}$ feature map; the codec matrix $G\in\mathbb{R}^{m\times n'}$ is calculated as:

$$G_{a,b}(x;W)=\sum_{s=1}^{h}\sum_{t=1}^{w}\frac{F^{1}_{s,t,a}(x;W)\times F^{2}_{s,t,b}(x;W)}{h\times w}$$

where s = 1, …, h and t = 1, …, w; x and W denote the input image and the weights of the guide/bionic network, respectively; $G_{a,b}(x;W)$ is the entry in row a, column b of the guide or bionic codec matrix;

step 202 specifically comprises the following steps:

for the i-th feature vector $Y^{(i)}$, compute the similarity value $P_j'$ by element-wise multiplication between each center pixel $P_{center}$ and the pixels $P_j$ in its adjacent d×d region:

$$P_j'=P_j\times P_{center}$$

where j denotes coordinates within the d×d region; a local representation is obtained for every pixel of the filtered vessel feature image, and the local representations are concatenated along the channel dimension to give the residual similarity $R_d^{(i)}$ of the i-th input feature vector, where d is the size of the self-defined region and H and W′ are the height and width of the feature vector; the residual similarities $R_d^{(i)}$ obtained for the respective values of d are added together, giving the final residual similarity of the i-th feature vector $R^{(i)}=\sum_{d}R_d^{(i)}$;

the guide network generates the guide codec matrices $G_i^T$ and the guide residual similarities $R_i^T$, and the bionic network generates the bionic codec matrices $G_i^S$ and the bionic residual similarities $R_i^S$, i = 1, …, n; the loss function of the information migration task is:

$$L_{IT}(W_t,W_s)=\frac{1}{N}\sum_{x}\sum_{i=1}^{n}\left(\lambda_i\left\|G_i^T(x;W_t)-G_i^S(x;W_s)\right\|_2^2+\beta_i\left\|R_i^T(x;W_t)-R_i^S(x;W_s)\right\|_2^2\right)$$

where $W_t$ is the guide network weight and $W_s$ the bionic network weight; $G_i^T$ denotes the codec matrix of the i-th feature vector of the guide network and $G_i^S$ that of the bionic network; n is the number of feature vectors; $\lambda_i$ and $\beta_i$ are the weight factors of the corresponding loss terms; and N is the number of data points;

step S3 predicts state changes for the segmented vessel image, splitting the ordered multi-class problem in turn into several binary classification problems;

the loss function of the ordered classification is:

$$L=-\frac{1}{N}\sum_{c=1}^{N}\sum_{t=1}^{T}\gamma_t\,w_c^t\Big(y_c^t\log p_c^t+\big(1-y_c^t\big)\log\big(1-p_c^t\big)\Big),\qquad p_c^t=p\big(o_c^t=1\mid x_c;W_t\big)$$

where T denotes the binary classification tasks and $\gamma_t$ the weight of the t-th binary task; $o_c^t$ denotes the output of the c-th sample on the t-th binary task and $y_c^t$ the true label of the t-th binary task for the c-th sample; $w_c^t$ is the weight of the c-th sample in the t-th binary task; $W_t$ denotes the parameters of the classifier of the t-th binary task; $x_c$ is the c-th input vector; and $p(\cdot)$ denotes the probability model; the final classification result is an integrated judgment over the classification results of each binary classification task.
CN202210367584.0A 2022-04-08 2022-04-08 Retina image analysis system and method based on information migration and ordered classification Active CN114663421B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210367584.0A CN114663421B (en) 2022-04-08 2022-04-08 Retina image analysis system and method based on information migration and ordered classification

Publications (2)

Publication Number Publication Date
CN114663421A CN114663421A (en) 2022-06-24
CN114663421B true CN114663421B (en) 2023-04-28

Family

ID=82034615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210367584.0A Active CN114663421B (en) 2022-04-08 2022-04-08 Retina image analysis system and method based on information migration and ordered classification

Country Status (1)

Country Link
CN (1) CN114663421B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117541908B (en) * 2024-01-10 2024-04-05 华芯程(杭州)科技有限公司 Training method, device and prediction method for optical detection image prediction model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408562A (en) * 2016-09-22 2017-02-15 South China University of Technology Fundus image retinal vessel segmentation method and system based on deep learning
CN112233135A (en) * 2020-11-11 2021-01-15 Tsinghua Shenzhen International Graduate School Retinal vessel segmentation method in fundus image and computer-readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE112005002929A5 (en) * 2004-09-21 2007-08-30 Imedos Gmbh Method and apparatus for retinal vascular analysis using digital images
CN108986106B (en) * 2017-12-15 2021-04-16 Zhejiang Chinese Medical University Automatic segmentation of retinal blood vessels for glaucoma
CN110473188B (en) * 2019-08-08 2022-03-11 Fuzhou University Fundus image blood vessel segmentation method based on Frangi enhancement and attention mechanism UNet
CN110930418B (en) * 2019-11-27 2022-04-19 Jiangxi University of Science and Technology Retina blood vessel segmentation method fusing W-net and conditional generative adversarial network
CN111598894B (en) * 2020-04-17 2021-02-09 Harbin Institute of Technology Retinal vascular image segmentation system based on global information convolutional neural network
CN114283158A (en) * 2021-12-08 2022-04-05 Chongqing University of Posts and Telecommunications Retinal blood vessel image segmentation method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant