
CN111340789A - Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels - Google Patents


Info

Publication number
CN111340789A
CN111340789A (application CN202010134390.7A)
Authority
CN
China
Prior art keywords
blood vessel
arteriovenous
feature map
fundus image
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010134390.7A
Other languages
Chinese (zh)
Other versions
CN111340789B (en)
Inventor
柳杨
王瑞
王立龙
吕彬
吕传峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010134390.7A priority Critical patent/CN111340789B/en
Publication of CN111340789A publication Critical patent/CN111340789A/en
Priority to PCT/CN2020/099538 priority patent/WO2021169128A1/en
Application granted granted Critical
Publication of CN111340789B publication Critical patent/CN111340789B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Heart & Thoracic Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method, an apparatus, a device, and a storage medium for identifying and quantifying fundus retinal blood vessels. The method includes: inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps; performing optic disc segmentation based on the target feature maps; segmenting the original fundus image to obtain an arteriovenous vessel identification result; locating a region of interest based on the optic disc segmentation result; extracting vessel centerlines from the arteriovenous vessel identification result, detecting key points on the centerlines and removing them to obtain a plurality of mutually independent vessel segments, and correcting the arteriovenous category information of each vessel segment; and obtaining the vessel diameter of each corrected vessel segment from the extracted centerlines, then quantifying the arteriovenous vessels in the region of interest. The embodiments of the application help improve the identification accuracy of fundus retinal arteriovenous vessels and thereby improve the quantification accuracy.

Description

Method, device, equipment and storage medium for identifying and quantifying eye fundus retinal blood vessels
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for identifying and quantifying fundus retinal blood vessels.
Background
Fundus retinal arteries and veins have long been a key subject of medical research, in particular the vessels within 1 PD to 1.5 PD (PD: optic disc diameter) of the optic disc center; changes in their caliber ratio or morphology are a basis for early diagnosis of various systemic and blood diseases such as cardiovascular disease, diabetes, and hypertension. Computing the diameter ratio of fundus retinal arteries and veins requires accurately classifying the arteriovenous vessels. In traditional fundus diagnosis, a doctor observes the fundus image and reaches a diagnosis from personal medical experience. With the development of computer image processing technology, retinal vessels are now mostly extracted from color fundus photographs; however, such photographs exhibit uneven brightness, complex interlacing of vessel and background colors, and small differences between arteries and veins, which makes the identification and classification of arteriovenous vessels difficult.
Disclosure of Invention
In view of the above problems, the present application provides a method, an apparatus, a device, and a storage medium for identifying and quantifying fundus retinal vessels, which help improve the identification accuracy of fundus retinal arteriovenous vessels and thereby the quantification accuracy.
In order to achieve the above object, a first aspect of the embodiments of the present application provides a method for identifying and quantifying fundus retinal vessels, the method including:
inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps at multiple scales;
performing optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result;
segmenting the original fundus image with a pre-trained cascade segmentation network model to obtain an arteriovenous vessel identification result;
locating the region of interest based on the optic disc segmentation result to obtain a region-of-interest positioning result;
extracting vessel centerlines from the arteriovenous vessel identification result, detecting key points on the centerlines by a neighborhood-connectivity judgment method, removing the key points to obtain a plurality of mutually independent vessel segments, and correcting the arteriovenous category information of each vessel segment to obtain vessel segments with corrected arteriovenous category information;
and obtaining, by a boundary detection method, the vessel diameter of each vessel segment with corrected arteriovenous category information based on its centerline, and calculating the diameter ratio of the arterial and venous vessels in the region of interest from the vessel diameters.
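The key-point detection and segment-splitting step above can be sketched on a one-pixel-wide binary centerline: a simplified reading in which a skeleton pixel with three or more 8-neighbours is treated as a key point, and removing key points breaks the skeleton into independent segments. The function name, the threshold of 3, and the toy cross-shaped skeleton are illustrative assumptions, not the patent's exact rule.

```python
import numpy as np

def split_segments(centerline):
    """Detect key points (branch/cross points) on a binary centerline by
    8-neighbourhood connectivity, then remove them so the remaining
    pixels fall apart into mutually independent vessel segments."""
    padded = np.pad(centerline, 1)
    # neighbour count: sum of the eight shifted copies of the mask
    nbrs = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))[1:-1, 1:-1]
    keypoints = (centerline == 1) & (nbrs >= 3)
    return centerline & ~keypoints, keypoints

# toy skeleton: a '+' shaped crossing of two vessel centerlines
sk = np.zeros((7, 7), dtype=np.uint8)
sk[3, 1:6] = 1   # horizontal stroke
sk[1:6, 3] = 1   # vertical stroke
segments, kps = split_segments(sk)
print(int(kps.sum()))  # the centre and the four pixels touching it -> 5
```

After the key points are removed, the surviving pixels can be grouped into connected components to obtain the independent vessel segments on which the arteriovenous category is then corrected.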
With reference to the first aspect, in a possible implementation manner, the inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain a target feature map at multiple scales includes:
inputting the original fundus image into an encoder part of the U-shaped convolution neural network model to perform key feature extraction to obtain a high-dimensional feature map;
and inputting the high-dimensional characteristic diagram into a decoder part of the U-shaped convolutional neural network model for up-sampling operation, and outputting target characteristic diagrams of multiple scales.
With reference to the first aspect, in a possible implementation manner, the inputting the original fundus image into an encoder portion of the U-shaped convolutional neural network model to perform key feature extraction, so as to obtain a high-dimensional feature map includes:
performing convolution processing on the original fundus image to extract key features to obtain a feature map with the same size as the original fundus image;
performing maximum pooling operation on the feature map obtained through convolution processing, reducing the size of the feature map layer by layer, and performing alternate processing on a plurality of convolution layers and pooling layers to obtain the high-dimensional feature map;
the step of inputting the high-dimensional feature map into a decoder part of the U-shaped convolutional neural network model for up-sampling operation and outputting target feature maps of multiple scales comprises the following steps:
carrying out up-sampling operation on the high-dimensional feature map, and amplifying the size of the high-dimensional feature map layer by layer;
combining the low-dimensional features extracted from each network layer in the encoding stage with the high-dimensional features symmetrically extracted in the decoding stage through a jump connection layer to obtain an initial feature map of each network layer, wherein the initial feature map of each network layer is different in scale;
and outputting the initial characteristic diagram of each network layer through the output branch of each network layer to obtain a plurality of scales of target characteristic diagrams, wherein an attention mechanism is added into the output branch of each network layer.
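The decoder-side enlargement and skip-connection merging described above can be illustrated with a toy example; nearest-neighbour repetition is used here as one simple upsampling choice and channel-wise concatenation as one common way to merge encoder and decoder features (the model's actual operators may differ):

```python
import numpy as np

def upsample2x(fm):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return fm.repeat(2, axis=1).repeat(2, axis=2)

# toy (channels, H, W) maps: encoder feature C2 and decoder feature P3
c2 = np.ones((4, 24, 24))    # low-dimensional feature from the encoder
p3 = np.zeros((8, 12, 12))   # high-dimensional feature from the decoder
p2 = np.concatenate([c2, upsample2x(p3)], axis=0)  # skip-connection merge
print(p2.shape)  # (12, 24, 24)
```

The merged map `p2` stands in for the "initial feature map" of one network layer; each layer produces one such map at its own scale.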
With reference to the first aspect, in a possible implementation manner, the performing optic disc segmentation based on the target feature map to obtain an optic disc segmentation result includes:
fusing the target feature maps to obtain an image to be segmented;
performing candidate-box regression on the image to be segmented to locate the optic disc position in the image and output bounding-box information of the optic disc;
and cutting out a calibrated image block of the optic disc region according to the optic disc bounding-box information, inputting the image block into a pre-trained U-shaped segmentation network, and outputting the optic disc segmentation result through feature extraction and upsampling operations.
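A minimal sketch of the cropping step above, assuming the regressed bounding box is given as (x1, y1, x2, y2) pixel coordinates; the function name and the `margin` parameter are illustrative choices, not specified by the patent:

```python
import numpy as np

def crop_disc(img, box, margin=8):
    """Cut the calibrated optic-disc image block out of a fundus image,
    given a regressed bounding box, with a small safety margin clamped
    to the image borders."""
    x1, y1, x2, y2 = box
    h, w = img.shape[:2]
    return img[max(0, y1 - margin):min(h, y2 + margin),
               max(0, x1 - margin):min(w, x2 + margin)]

img = np.zeros((512, 512, 3), dtype=np.uint8)   # placeholder fundus image
patch = crop_disc(img, (200, 180, 280, 260))
print(patch.shape)  # (96, 96, 3)
```

The cropped patch would then be fed to the U-shaped segmentation network for the final optic disc mask.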
In combination with the first aspect, in one possible embodiment,
the method for segmenting the original fundus image by adopting the pre-trained cascade segmentation network model to obtain the arteriovenous blood vessel recognition result comprises the following steps:
extracting a green channel image of the original fundus image, and performing histogram equalization processing on the green channel image to obtain a contrast-enhanced green channel image;
cutting the contrast-enhanced green channel image into a plurality of fundus image blocks;
and inputting the fundus image blocks into a preset cascade segmentation network model for segmentation to obtain an arteriovenous blood vessel identification result.
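The preprocessing steps above (green-channel extraction, histogram equalization, cutting into fundus image blocks) can be sketched as follows; the patch size and the plain global equalization are illustrative assumptions:

```python
import numpy as np

def equalize_hist(channel):
    """Plain histogram equalization of an 8-bit channel: the contrast
    enhancement applied to the green channel before segmentation."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[channel]

def to_patches(img, size):
    """Cut an image into non-overlapping size x size fundus image blocks
    (assumes the image dimensions are multiples of `size`)."""
    h, w = img.shape
    return [img[y:y + size, x:x + size]
            for y in range(0, h, size) for x in range(0, w, size)]

rgb = np.random.default_rng(0).integers(0, 256, (96, 96, 3), dtype=np.uint8)
green = rgb[:, :, 1]              # green channel of the original fundus image
enhanced = equalize_hist(green)   # contrast-enhanced green channel
patches = to_patches(enhanced, 48)
print(len(patches))  # 4 blocks of 48x48
```

Each block would then be passed to the cascade segmentation network for arteriovenous vessel identification.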
In combination with the first aspect, in one possible embodiment,
the method for acquiring the blood vessel diameter of each blood vessel section after the arteriovenous classification information is corrected by adopting a boundary detection method comprises the following steps:
traversing in a rectangular area with the pixel range of 40 × 40 by taking the center point of the blood vessel as the center of a circle, searching a boundary point which is closest to the center line of each blood vessel section after the arteriovenous classification information correction, and taking the distance between the boundary point which is closest to the center line of the blood vessel section and the center point of the blood vessel as the radius r to obtain the blood vessel diameter 2r of each blood vessel section after the arteriovenous classification information correction.
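A direct reading of this boundary-detection step on a toy binary vessel mask; the helper name and the nearest-background-pixel criterion for "closest boundary point" are assumptions made for illustration:

```python
import numpy as np

def vessel_diameter(mask, cy, cx, half=20):
    """Estimate the vessel diameter at centerline point (cy, cx): scan a
    40x40 window (half=20) around the point, take the distance from the
    centre point to the nearest background pixel as the radius r, and
    return 2r. `mask` is a binary vessel map."""
    h, w = mask.shape
    best = np.inf
    for y in range(max(0, cy - half), min(h, cy + half)):
        for x in range(max(0, cx - half), min(w, cx + half)):
            if mask[y, x] == 0:  # background pixel just outside the wall
                best = min(best, np.hypot(y - cy, x - cx))
    return 2.0 * best

# toy mask: a vertical vessel 5 pixels wide (columns 18..22)
mask = np.zeros((40, 40), dtype=np.uint8)
mask[:, 18:23] = 1
print(vessel_diameter(mask, 20, 20))  # nearest background 3 px away -> 6.0
```

In practice the measurement would be repeated along each segment's centerline and aggregated into the per-segment diameter used by the quantification step.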
With reference to the first aspect, in one possible implementation, the following formula is used to calculate the ratio of the diameters of the arterial and venous vessels in the region of interest:

AVR = CRAE / CRVE

where AVR denotes the diameter ratio of arterial to venous vessels in the region of interest, and CRAE denotes the central retinal artery equivalent diameter,

CRAE = 0.88 × √(Ai² + Aj²)

where Ai and Aj respectively denote the maximum arterial vessel diameter and the minimum arterial vessel diameter obtained in the region of interest, and 0.88 is a fixed coefficient; CRVE denotes the central retinal vein equivalent diameter,

CRVE = 0.95 × √(Vi² + Vj²)

where Vi and Vj respectively denote the maximum venous vessel diameter and the minimum venous vessel diameter obtained in the region of interest, and 0.95 is a fixed coefficient.
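Under the definitions above, the quantification reduces to a few lines of arithmetic; the function names and toy diameters are illustrative:

```python
import math

def crae(artery_widths):
    """Central retinal artery equivalent: pairs the widest and narrowest
    arterial diameters in the ROI with the fixed coefficient 0.88."""
    a_max, a_min = max(artery_widths), min(artery_widths)
    return 0.88 * math.sqrt(a_max ** 2 + a_min ** 2)

def crve(vein_widths):
    """Central retinal vein equivalent, fixed coefficient 0.95."""
    v_max, v_min = max(vein_widths), min(vein_widths)
    return 0.95 * math.sqrt(v_max ** 2 + v_min ** 2)

def avr(artery_widths, vein_widths):
    """Arteriolar-to-venular diameter ratio AVR = CRAE / CRVE."""
    return crae(artery_widths) / crve(vein_widths)

# toy vessel diameters (pixels) measured inside the region of interest
arteries = [9.0, 12.0, 10.5]
veins = [12.0, 16.0, 13.0]
print(round(avr(arteries, veins), 4))  # 13.2 / 19.0 -> 0.6947
```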
A second aspect of the embodiments of the present application provides a device for identifying and quantifying fundus retinal blood vessels, the device including:
the characteristic extraction module is used for inputting the original fundus image into a pre-trained U-shaped convolution neural network model for processing to obtain target characteristic graphs of multiple scales;
the optic disc segmentation module is used for carrying out optic disc segmentation based on the target characteristic graph to obtain an optic disc segmentation result;
the blood vessel recognition module is used for segmenting the original fundus image by adopting a pre-trained cascade segmentation network model to obtain an arteriovenous blood vessel recognition result;
the region positioning module is used for positioning the region of interest based on the optic disc segmentation result to obtain a region of interest positioning result;
the centerline extraction module is used for extracting vessel centerlines from the arteriovenous vessel identification result, detecting key points on the centerlines by a neighborhood-connectivity judgment method, removing the key points to obtain a plurality of mutually independent vessel segments, and correcting the arteriovenous category information of each vessel segment to obtain vessel segments with corrected arteriovenous category information;
and the diameter ratio calculation module is used for obtaining, by a boundary detection method, the vessel diameter of each vessel segment with corrected arteriovenous category information based on its centerline, and calculating the diameter ratio of the arterial and venous vessels in the region of interest from the vessel diameters.
A third aspect of embodiments of the present application provides an electronic device, including: the device comprises a processor, a memory and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the computer program to realize the steps of the fundus retinal blood vessel identification and quantification method.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium having stored thereon a computer program, which when executed by a processor, implements the steps in the fundus retinal blood vessel identification and quantification method described above.
The above scheme of the present application has at least the following beneficial effects: feature extraction is performed on an original fundus image to obtain target feature maps at multiple scales; optic disc segmentation is performed based on the target feature maps to obtain an optic disc segmentation result; the original fundus image is segmented with a pre-trained cascade segmentation network model to obtain an arteriovenous vessel identification result; the region of interest is located based on the optic disc segmentation result; vessel centerlines are extracted from the arteriovenous vessel identification result, key points on the centerlines are detected with a neighborhood-connectivity judgment method and removed to obtain a plurality of mutually independent vessel segments, and the arteriovenous category information of each segment is corrected; the vessel diameter of each corrected segment is then obtained from the extracted centerlines with a boundary detection method, and the diameter ratio of arterial to venous vessels in the region of interest is calculated from the obtained diameters. This improves the identification accuracy of fundus retinal arteriovenous vessels and thereby the quantification accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a diagram of an application architecture provided by an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for identifying and quantifying fundus retinal vessels according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an optic disc segmentation network according to an embodiment of the present application;
FIG. 4-a is a schematic flow chart of downsampling according to an embodiment of the present application;
fig. 4-b is an exemplary diagram of the encoder portion of a U-shaped convolutional neural network model provided by an embodiment of the present application;
fig. 5-a is a schematic flow chart of upsampling provided by an embodiment of the present application;
fig. 5-b is an exemplary diagram of the decoder portion of a U-shaped convolutional neural network model provided in an embodiment of the present application;
FIG. 6 is an exemplary diagram of optic disc segmentation provided by an embodiment of the present application;
fig. 7 is an exemplary diagram of a cascade segmentation network model provided in an embodiment of the present application;
fig. 8 is an exemplary diagram of an expected output area A and an actual output area B of a U-shaped segmentation network according to an embodiment of the present disclosure;
fig. 9 is a schematic flowchart of a process for performing arteriovenous vessel identification based on an original fundus image according to an embodiment of the present application;
FIG. 10 is an exemplary diagram of a segmented fundus image block provided in an embodiment of the present application;
FIG. 11-a is an exemplary diagram of a region of interest localization result provided by an embodiment of the present application;
fig. 11-b is an exemplary view of a vessel centerline provided by an embodiment of the present application;
fig. 11-c is an exemplary diagram of vessel centerline key point detection provided by an embodiment of the present application;
fig. 12 is a schematic structural diagram of a device for identifying and quantifying fundus retinal blood vessels according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
The terms "comprising" and "having," and any variations thereof, as appearing in the specification, claims and drawings of this application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
First, a network system architecture to which the solution of the embodiments of the present application may be applied is described with reference to the accompanying drawings. Referring to fig. 1, fig. 1 is an application architecture diagram provided in an embodiment of the present application. As shown in fig. 1, the architecture includes an electronic device, an image acquisition device, and a database, where the image acquisition device and the database each connect to and communicate with the electronic device through a network. The electronic device includes a processor and may be any device capable of implementing the fundus retinal blood vessel identification and quantification method provided in the embodiments of the present application, for example: a supercomputer in a medical research laboratory, a computer in a hospital examination room, a server, etc. The image acquisition device may be any device capable of acquiring a fundus image, for example a color fundus camera. In one application scenario, in a hospital ophthalmic examination room, after the image acquisition device photographs a fundus image of a person under examination, the fundus image is transmitted to the electronic device through the network, and the electronic device executes the fundus retinal blood vessel identification and quantification method provided by the present application to identify and quantify the fundus image and output a quantification result.
The database may be a local database or an external database. A local database is a database of an enterprise, a hospital, a research laboratory, or the like; an external database is a commonly used public fundus image database published at home or abroad. In another application scenario, when research staff need to test the fundus retinal blood vessel identification and quantification method provided by the present application, fundus images for testing can be obtained from the database through the network, and the testing operation is performed by the electronic device. Of course, when the database is a local database, the image acquisition device may establish a connection with the database and store the photographed fundus images in it.
Referring to fig. 2, fig. 2 is a schematic flowchart of a method for identifying and quantifying fundus retinal vessels according to an embodiment of the present application; fig. 2 includes steps S21-S27:
s21, an original fundus image is acquired.
In the embodiment of the present application, the original fundus image may be a fundus image acquired in real time by an image acquisition device, for example a fundus image of a subject collected in a medical laboratory or of a patient collected in a hospital examination room; of course, the original fundus image may also be a fundus image from an open-source database such as DRIVE, without particular limitation.
And S22, inputting the original fundus image into a pre-trained U-shaped convolution neural network model for processing to obtain target characteristic maps of multiple scales.
In the embodiment of the present application, fig. 3 is a schematic structural diagram of the optic disc segmentation network. The main portion of the optic disc segmentation network is the U-shaped convolutional neural network model on the left (from the input of the original fundus image to the output of the target feature maps). Apart from the basic input and output layers, the U-shaped convolutional neural network model includes a plurality of hidden layers in a symmetrical structure, forming the encoder portion (downsampling portion) and decoder portion (upsampling portion) of the model.
In an embodiment, the inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain a target feature map with multiple scales includes:
a: inputting the original fundus image into an encoder part of the U-shaped convolution neural network model to perform key feature extraction to obtain a high-dimensional feature map;
as shown in fig. 4-a, step a includes:
s41, performing convolution processing on the original fundus image to extract key features, and obtaining a feature map with the same size as the original fundus image;
and S42, performing maximum pooling operation on the feature map obtained through the convolution processing, reducing the size of the feature map layer by layer, and performing alternating processing on a plurality of convolution layers and pooling layers to obtain the high-dimensional feature map.
In the embodiments of the present application, the key features are features with strong representational ability, such as features with large pixel values. As shown in fig. 4-b, the original fundus image is input into the encoder portion of the U-shaped convolutional neural network model. It is first convolved by the convolution layer conv to obtain a feature map of the same size as the original fundus image after the initial convolution operation, such as C1 in fig. 3; feature map C1 then undergoes a downsampling maximum pooling operation through the max-pooling layer, reducing its size to obtain feature map C2; feature map C2 is in turn processed by the convolution layer conv and the max-pooling layer to obtain feature map C3. In this way, alternating processing by convolution and max-pooling layers yields a lower-resolution high-dimensional feature map, forming a feature pyramid from low to high dimensions. The convolution in the encoder portion may use a 3 × 3 convolution kernel with a step size of 2, and each maximum pooling operation reduces the feature map by a preset multiple. For example, with 2x downsampling, if the size of feature map C1 is 48 × 48, then feature map C2 obtained after one maximum pooling operation has size 24 × 24, and similarly feature map C3 has size 12 × 12.
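The size reduction described here (48 → 24 → 12 under 2x downsampling) can be checked with a small 2 × 2 max-pooling sketch; the numpy reshape trick below is one simple way to implement it:

```python
import numpy as np

def max_pool2x2(fm):
    """2x downsampling by 2x2 max pooling, as in the encoder: each
    pooling halves the feature-map size (48 -> 24 -> 12)."""
    h, w = fm.shape
    return fm[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

c1 = np.random.default_rng(1).random((48, 48))
c2 = max_pool2x2(c1)
c3 = max_pool2x2(c2)
print(c1.shape, c2.shape, c3.shape)  # (48, 48) (24, 24) (12, 12)
```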
B: and inputting the high-dimensional characteristic diagram into a decoder part of the U-shaped convolutional neural network model for up-sampling operation, and outputting target characteristic diagrams of multiple scales.
As shown in fig. 5-a, step B includes:
s51, performing up-sampling operation on the high-dimensional feature map, and amplifying the size of the high-dimensional feature map layer by layer;
s52, merging the low-dimensional features extracted from each network layer in the encoding stage and the high-dimensional features symmetrically extracted in the decoding stage through a jump connection layer to obtain an initial feature map of each network layer, wherein the initial feature maps of each network layer are different in scale;
and S53, outputting the initial characteristic diagram of each network layer through the output branch of each network layer to obtain a target characteristic diagram with a plurality of scales, wherein an attention mechanism is added into the output branch of each network layer.
In the embodiment of the present application, as shown in fig. 5-b, the decoder portion of the U-shaped convolutional neural network model performs an upsampling operation on the high-dimensional feature map obtained by the downsampling of the encoder portion, enlarging the size of the high-dimensional feature map by a preset multiple each time; for example, the feature map P5 in fig. 3 is enlarged to the size of the feature map P4 after one upsampling operation, so if the original size of the feature map P5 is 12 × 12, the size of the feature map P4 is 24 × 24. The upsampling may be performed by a commonly used interpolation method, for example nearest neighbor interpolation, bilinear interpolation, or mean interpolation, and is not particularly limited here. After a high-dimensional feature map with an enlarged size is obtained each time, the low-dimensional feature map extracted by the same network layer in the encoding stage is merged with the corresponding high-dimensional feature map; for example, the feature map C2 and the feature map P2 in fig. 3 are feature maps symmetrically extracted by the same network layer in the encoding stage and the decoding stage, and the initial feature map of that network layer is obtained by merging C2 and P2 through a skip-connection layer. The initial feature maps of each network layer are then output through an output branch to which an attention mechanism is added, yielding target feature maps of multiple scales. The attention mechanism is an SE module, which completes the recalibration of the multi-scale initial feature maps in the channel dimension through the Squeeze operation, the Excitation operation and the reweighting operation, so that the output target feature maps are more effective, and the problems of unclear optic disc boundaries and varying optic disc region sizes can be well handled.
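The Squeeze → Excitation → reweighting sequence of the SE module can be sketched in numpy. This is a toy forward pass with randomly initialised weights `w1` and `w2` (a trained SE block learns them); the reduction ratio of 4 is an illustrative assumption:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_recalibrate(feat, w1, w2):
    """Squeeze-and-Excitation channel recalibration for a (C, H, W) feature map.

    w1: (C//r, C) reduction weights, w2: (C, C//r) expansion weights.
    """
    s = feat.mean(axis=(1, 2))               # Squeeze: global average pool -> (C,)
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0))  # Excitation: FC-ReLU-FC-sigmoid -> (C,)
    return feat * e[:, None, None]           # Reweighting: per-channel scaling

rng = np.random.default_rng(0)
feat = rng.random((16, 24, 24))              # an initial feature map, 16 channels
w1, w2 = rng.random((4, 16)), rng.random((16, 4))
out = se_recalibrate(feat, w1, w2)
print(out.shape)  # (16, 24, 24)
```

The output keeps the spatial layout of the initial feature map and only rescales each channel by a learned weight in (0, 1), which is the "recalibration in channel dimension" mentioned above.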
And S23, performing optic disc segmentation based on the target feature map to obtain optic disc segmentation results.
In the embodiment of the application, a two-stage optic disc segmentation network is adopted to perform optic disc segmentation: the first stage performs feature extraction with the U-shaped convolutional neural network model, and the second stage performs optic disc segmentation on the basis of the features extracted in the first stage. Specifically, performing optic disc segmentation based on the output target feature maps to obtain an optic disc segmentation result includes: fusing the output target feature maps to obtain an image to be segmented, performing candidate-frame regression processing on the image to be segmented to locate the optic disc position in the image to be segmented and output the boundary frame information of the optic disc, cropping out a calibrated image block of the optic disc region according to the boundary frame information of the optic disc, inputting the image block into a pre-trained U-shaped segmentation network, and outputting the optic disc segmentation result through feature extraction and up-sampling operations.
As shown in fig. 3, after the U-type convolutional neural network model outputs target feature maps of multiple scales, the target feature maps are fused to obtain an image to be segmented with better resolution. The image to be segmented is subjected to feature extraction through the candidate-frame regression module ROI Align: each candidate region is traversed according to the extracted feature map without quantizing the floating-point boundaries; then, as shown in fig. 6, the candidate region is divided into n rectangular units without quantizing each unit boundary, four coordinate positions are determined in each rectangular unit according to a fixed rule, the values at the four positions are calculated by bilinear interpolation, and a maximum pooling operation is performed to obtain the feature map on the right side of fig. 6. Flattening processing is then performed through a flatten module, and the boundary frame (box) of the optic disc is output. The image block of the optic disc region is cropped according to the boundary frame of the optic disc and input into the U-shaped convolutional neural network for optic disc segmentation, obtaining the final optic disc segmentation image. This two-stage optic disc segmentation model design is beneficial to eliminating the interference of fundus highlight noise, caused by a poor shooting environment or shooting technique, on optic disc segmentation, and obtaining optic disc segmentation results with higher precision.
And S24, segmenting the original fundus image by adopting a pre-trained cascade segmentation network model to obtain an arteriovenous blood vessel identification result.
In the embodiment of the present application, arteriovenous vessel segmentation and identification are performed by using a cascaded segmentation network model. To improve the accuracy of arteriovenous identification, as shown in fig. 7, a cascade of three U-shaped segmentation networks is adopted here: a first U-shaped segmentation network, a second U-shaped segmentation network and a third U-shaped segmentation network. Each U-shaped segmentation network has the structure shown in the enlarged view at the lower side of fig. 7: the left side is the feature extraction part and the right side is the up-sampling part; the left side adopts 2 × 2 maximum pooling, the right side adopts 2 × 2 deconvolution, and both sides adopt 3 × 3 convolution kernels for feature extraction. The whole U-shaped segmentation network has no fully connected layer and contains only the necessary convolutional layers. In the whole training process of the cascade segmentation network model, the loss function Dice Loss of each U-shaped segmentation network needs to be considered; the Dice value is calculated first, using the following formula:
Dice = (2 × |A ∩ B| + smooth) / (|A| + |B| + smooth)
as shown in fig. 8, Dice represents the degree of overlap between the expected output area A and the actual output area B of each U-shaped segmentation network, and smooth is a smoothing coefficient, set to 1 by default; the loss function of each U-shaped segmentation network is then calculated as L_k = Dice Loss = 1 - Dice, where L_k represents the loss function value of each U-shaped segmentation network; finally, the loss functions L_k of the U-shaped segmentation networks are combined to obtain the loss value of the whole preset cascade segmentation network model, using the following formula:
Loss = L_1 + L_2 + ... + L_K
wherein Loss represents the loss value of the whole preset cascade segmentation network model, L_k represents the loss function value of the k-th U-shaped segmentation network, and K represents the number of U-shaped segmentation networks in the preset cascade segmentation network model, where K = 3. The loss value of the whole preset cascade segmentation network model is calculated to guide the optimization training of the model and obtain a more accurate arteriovenous blood vessel identification result; the smaller the Loss value, the more accurate the arteriovenous blood vessel identification result of the cascade segmentation network model.
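The Dice Loss and the cascade loss described above can be checked with a few lines of numpy on toy binary masks (this is a numerical illustration, not the training code of the application; whether the per-network losses are summed or averaged is not stated explicitly, summation is assumed here):

```python
import numpy as np

def dice_loss(pred, target, smooth=1.0):
    """Dice Loss = 1 - Dice for binary masks, smooth defaults to 1."""
    inter = np.sum(pred * target)
    dice = (2.0 * inter + smooth) / (pred.sum() + target.sum() + smooth)
    return 1.0 - dice

# Toy expected/actual masks: Dice = (2*2 + 1) / (3 + 2 + 1) = 5/6.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=float)
target = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)

# Overall loss of a K=3 cascade: sum of the per-network Dice losses.
per_net = [dice_loss(pred, target) for _ in range(3)]
total = sum(per_net)
print(round(dice_loss(pred, target), 4))  # 0.1667
```

A perfectly matching prediction gives Dice = 1 and therefore L_k = 0, so driving Loss toward 0 drives every network in the cascade toward full overlap with its expected output.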
In one embodiment, as shown in fig. 9, the segmenting the original fundus image by using the pre-trained cascade segmentation network model to obtain the arteriovenous vessel recognition result includes steps S91-S93:
S91, extracting a green channel image of the original fundus image, and performing histogram equalization processing on the green channel image to obtain a contrast-enhanced green channel image;
S92, cutting the contrast-enhanced green channel image into a plurality of fundus image blocks;
S93, inputting the plurality of fundus image blocks into a preset cascade segmentation network model for segmentation, and acquiring an arteriovenous blood vessel identification result.
For an original fundus image in RGB format, the green channel image, in which the vascular structure is most distinct, is selected for histogram equalization processing to enhance contrast. Then, as shown in fig. 10, a Patch operation is adopted to cut the contrast-enhanced green channel image into two fundus image blocks Patch1 and Patch2, which are input into the preset cascade segmentation network model for segmentation; of course, the number of fundus image blocks cut in actual operation is much larger, for example Patch3, Patch4 and Patch5 may exist, and each cut fundus image block may have an overlapping part. A cut fundus image block is input into the first U-shaped segmentation network for feature extraction and up-sampling to obtain a first output result; the first output result is taken as the input of the second U-shaped segmentation network for feature extraction and up-sampling to obtain a second output result; and the second output result is taken as the input of the third U-shaped segmentation network for feature extraction and up-sampling to obtain the arteriovenous blood vessel identification result, which includes category information for arteriovenous blood vessel identification, such as arterial vessel and venous vessel labels.
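The green-channel preprocessing and Patch cutting can be sketched with numpy (a simplified histogram equalization; the patch size and stride are illustrative assumptions, and real patches may overlap as noted above):

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization for an 8-bit single-channel image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)  # remap gray levels through the CDF

def cut_patches(img, size, stride):
    """Cut an image into overlapping square patches (Patch1, Patch2, ...)."""
    h, w = img.shape
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]

rgb = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in fundus image
green = rgb[:, :, 1]                                 # green channel: clearest vessels
enhanced = equalize_hist(green)
patches = cut_patches(enhanced, size=48, stride=16)  # overlapping fundus blocks
print(len(patches), patches[0].shape)  # 4 (48, 48)
```

In practice a library routine such as OpenCV's equalizeHist would typically replace the hand-rolled version.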
And S25, positioning the region of interest based on the optic disc segmentation result to obtain a region of interest positioning result.
In the embodiment of the present application, a region of interest (ROI) is, in machine vision and image processing, a region to be processed that is delineated from the processed image by a box, circle, ellipse, irregular polygon, or the like; here it refers specifically to the fundus region within the range of 1pd to 1.5pd (pd: optic disc diameter) from the center of the optic disc. On the basis of the optic disc segmentation result, ellipse fitting is performed on the optic disc boundary, specifically by the least-squares method, and the center of the optic disc is then determined. Taking the optic disc center as the center of a circle, the fundus region within the range of 1pd-1.5pd is located as the region of interest, and the region-of-interest positioning result is obtained and used as the candidate region for subsequent caliber measurement, as shown in fig. 11-a.
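Once the disc centre and a disc diameter pd are known, the 1pd-1.5pd annulus can be expressed as a simple distance mask. This sketch approximates the ellipse-fitted disc by its centre and a single diameter value (all numbers are made-up illustrations):

```python
import numpy as np

def roi_annulus(shape, center, pd):
    """Mask of the fundus annulus between 1*pd and 1.5*pd from the disc centre."""
    yy, xx = np.indices(shape)
    d = np.hypot(yy - center[0], xx - center[1])  # distance to disc centre
    return (d >= 1.0 * pd) & (d <= 1.5 * pd)

mask = roi_annulus((100, 100), center=(50, 50), pd=20)
# (50, 75) lies 25px away -> inside [20, 30]; (50, 55) and (50, 95) lie outside.
print(mask[50, 75], mask[50, 55], mask[50, 95])  # True False False
```

Only vessel segments falling inside this mask would then be considered for the caliber measurement that follows.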
S26, extracting a blood vessel center line according to the arteriovenous blood vessel identification result, detecting key points in the blood vessel center line by adopting a neighborhood connectivity judgment method, removing the key points to obtain a plurality of mutually independent blood vessel sections, and correcting arteriovenous category information on each blood vessel section to obtain each blood vessel section after arteriovenous category information is corrected.
In the specific embodiment of the application, the arteriovenous blood vessel identification result is first binarized to generate a fundus blood vessel binary map, the fundus blood vessel binary map is input into a U-shaped segmentation network to extract the blood vessel center line, and a refined blood vessel center line map is output, as shown in fig. 11-b. Then, the key points in the blood vessel center line, such as intersection points and branch points, are detected by a neighborhood connectivity determination method. The neighborhood connectivity determination method may be based on 8-neighborhood connectivity: the 8-neighborhood of a pixel p is denoted N8(p), and for a pixel p and a pixel q having pixel value x, if q is in the set N8(p), the pixels p and q are determined to be 8-connected; the intersection points and branch points shown in fig. 11-c can thereby be determined. After the intersection points and branch points are detected, they are removed from the blood vessel center line so that the blood vessel segments are separated along the center line into mutually independent segments. Connectivity determination is performed on each blood vessel segment; after each blood vessel segment is confirmed to be connected, the arteriovenous category information on each blood vessel segment is counted, and the arteriovenous category information on each blood vessel segment is corrected by a voting decision method to obtain each blood vessel segment after arteriovenous category information correction. For example, if the number of pixels marked with artery labels on a blood vessel segment is larger, the other pixels marked with vein labels are relabeled as artery, ensuring that the category information of the pixel points on the same blood vessel segment is consistent.
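The 8-neighborhood key-point test and the voting correction can be sketched in numpy. This is a toy illustration on a T-shaped centreline, with 1 = artery and 2 = vein as assumed label codes; real key-point detection usually adds extra rules to avoid spurious junctions on thick skeletons:

```python
import numpy as np

def count_8_neighbors(skel):
    """For each centreline pixel, count set pixels in its 8-neighborhood N8(p)."""
    p = np.pad(skel.astype(int), 1)
    n = np.zeros_like(skel, dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                n += p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
    return n * skel  # counts are only meaningful on centreline pixels

def majority_vote(labels):
    """Force one artery/vein label per segment by majority vote (1=artery, 2=vein)."""
    vals, counts = np.unique(labels, return_counts=True)
    return np.full_like(labels, vals[counts.argmax()])

# T-shaped centreline: the junction pixel has 3 neighbours (a branch point),
# endpoints have 1, interior pixels of a straight run have 2.
skel = np.zeros((5, 5), dtype=int)
skel[2, :] = 1    # horizontal segment
skel[2:, 2] = 1   # vertical branch downwards
n = count_8_neighbors(skel)
print(n[2, 2])    # 3

print(majority_vote(np.array([1, 1, 2, 1])))  # [1 1 1 1]
```

Removing the pixels with three or more neighbours then splits the skeleton into the mutually independent segments described above.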
Compared with the traditional morphological refining operation, the vessel centerline extraction method based on deep learning avoids complex refining rules of manual design, reduces false positive branches in the vessel centerline, and can obtain more accurate centerline extraction results.
S27, based on the blood vessel central line of each blood vessel section corrected by the arteriovenous category information, obtaining the blood vessel diameter of each blood vessel section corrected by the arteriovenous category information by adopting a boundary detection method, and calculating the diameter ratio of the arterial blood vessel and the venous blood vessel in the region of interest according to the blood vessel diameter.
In the embodiment of the present application, on the basis of the blood vessel center line extracted in step S26, the blood vessel diameter of each blood vessel segment after arteriovenous category information correction in the region of interest is calculated by a boundary detection method. Specifically, a rectangular region of 40 × 40 pixels centered on the center-line point is traversed, the boundary point closest to the center line of each blood vessel segment after arteriovenous category information correction is found, and the distance between that boundary point and the center-line point is taken as the radius r, so as to obtain the blood vessel diameter 2r of each blood vessel segment after arteriovenous category information correction.
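The nearest-boundary-point rule can be sketched as follows (the coordinates are made-up illustrations, and the 40 × 40 search window of the actual method is omitted; only the distance computation is shown):

```python
import numpy as np

def vessel_diameter(center, boundary_points):
    """Diameter 2r from the boundary point closest to a centreline point."""
    d = np.hypot(boundary_points[:, 0] - center[0],
                 boundary_points[:, 1] - center[1])
    r = d.min()        # nearest boundary point defines the radius r
    return 2.0 * r

# Centreline point at (10, 10); candidate vessel-wall points at distances 3, 4, 4.
boundary = np.array([[10, 13], [10, 6], [14, 10]])
print(vessel_diameter(np.array([10, 10]), boundary))  # 6.0
```

Repeating this along the centreline of each corrected segment yields the per-segment diameters fed into the CRAE/CRVE computation below.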
According to the vessel diameter of each vessel segment in the region of interest, CRAE (Central Retinal Artery Equivalent) and CRVE (Central Retinal Vein Equivalent) are calculated respectively by using the medical Parr-Hubbard-Knudtson formula, and then the diameter ratio of the arterial vessels and the venous vessels in the region of interest is calculated by using the following formula:
AVR = CRAE / CRVE
wherein AVR (Arteriole-to-Venule Ratio) represents the ratio of the diameters of arterial and venous vessels in the region of interest,
CRAE = 0.88 × sqrt(A_i^2 + A_j^2)
A_i and A_j respectively represent the maximum arterial vessel diameter and the minimum arterial vessel diameter obtained in the region of interest, and 0.88 is a fixed coefficient,
CRVE = 0.95 × sqrt(V_i^2 + V_j^2)
V_i and V_j respectively represent the maximum venous vessel diameter and the minimum venous vessel diameter obtained in the region of interest, and 0.95 is a fixed coefficient. Disease prediction and evaluation can be carried out based on the calculated AVR value, which has certain clinical guiding significance.
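Assuming the standard root-sum-of-squares form of the Knudtson revision (consistent with the coefficients 0.88 and 0.95 stated above), the AVR computation can be sketched with made-up diameter values:

```python
import numpy as np

def knudtson_equivalent(w_max, w_min, coeff):
    """Combine the widest and narrowest measured trunks into one equivalent."""
    return coeff * np.sqrt(w_max ** 2 + w_min ** 2)

def avr(artery_diams, vein_diams):
    """AVR = CRAE / CRVE from the measured diameters inside the ROI."""
    crae = knudtson_equivalent(max(artery_diams), min(artery_diams), 0.88)
    crve = knudtson_equivalent(max(vein_diams), min(vein_diams), 0.95)
    return crae / crve

# Illustrative per-segment diameters (pixels) measured inside the annular ROI.
print(round(avr([12.0, 9.0, 16.0], [20.0, 14.0, 18.0]), 4))
```

Healthy eyes typically have AVR below 1 because veins are wider than arteries, so a value drifting far from the expected range is what carries the clinical signal.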
It can be seen that, in the embodiments of the present application, feature extraction is performed on an original fundus image to obtain target feature maps of multiple scales; optic disc segmentation is performed based on the obtained target feature maps to obtain optic disc segmentation results; a pre-trained cascade segmentation network model is used to segment the original fundus image to obtain arteriovenous blood vessel identification results; region-of-interest positioning is performed based on the optic disc segmentation results; a blood vessel center line is extracted according to the arteriovenous blood vessel identification results, key points in the blood vessel center line are detected by a neighborhood connectivity determination method and removed to obtain multiple mutually independent blood vessel segments, and the arteriovenous category information on each blood vessel segment is corrected; based on the extracted blood vessel center line, a boundary detection method is used to obtain the blood vessel diameter of each blood vessel segment after category information correction, and the ratio of the diameters of arterial and venous vessels in the region of interest is calculated from the obtained blood vessel diameters. The identification precision of fundus retinal arteriovenous blood vessels is thereby improved, and the quantification precision is further improved.
Based on the above description, please refer to fig. 12, fig. 12 is a schematic structural diagram of a retinal vascular identification and quantification apparatus according to an embodiment of the present application, and as shown in fig. 12, the apparatus includes:
the feature extraction module 1201 is used for inputting an original fundus image into a pre-trained U-shaped convolution neural network model for processing to obtain target feature maps of multiple scales;
a optic disc segmentation module 1202, configured to perform optic disc segmentation based on the target feature map, so as to obtain an optic disc segmentation result;
a blood vessel identification module 1203, configured to segment the original fundus image by using a pre-trained cascade segmentation network model to obtain an arteriovenous blood vessel identification result;
a region positioning module 1204, configured to perform region-of-interest positioning based on the optic disc segmentation result, to obtain a region-of-interest positioning result;
a center line extraction module 1205, configured to extract a blood vessel center line according to an arteriovenous blood vessel identification result, detect a key point in the blood vessel center line by using a neighborhood connectivity determination method, remove the key point to obtain a plurality of blood vessel segments that are independent of each other, and correct arteriovenous category information on each blood vessel segment to obtain each blood vessel segment after arteriovenous category information is corrected;
the diameter ratio calculation module 1206 is configured to obtain the blood vessel diameters of the blood vessel sections corrected by the arteriovenous classification information by using a boundary detection method based on the blood vessel center lines of the blood vessel sections corrected by the arteriovenous classification information, and calculate the diameter ratio of the arterial blood vessel and the venous blood vessel in the region of interest according to the blood vessel diameters.
Optionally, the feature extraction module 1201 is specifically configured to, in the aspect that the original fundus image is input into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales:
inputting the original fundus image into an encoder part of the U-shaped convolution neural network model to perform key feature extraction to obtain a high-dimensional feature map;
and inputting the high-dimensional characteristic diagram into a decoder part of the U-shaped convolutional neural network model for up-sampling operation, and outputting target characteristic diagrams of multiple scales.
Optionally, the feature extraction module 1201 performs key feature extraction on the encoder portion that inputs the original fundus image into the U-shaped convolutional neural network model to obtain a high-dimensional feature map, and is specifically configured to:
performing convolution processing on the original fundus image to extract key features to obtain a feature map with the same size as the original fundus image;
performing maximum pooling operation on the feature map obtained through convolution processing, reducing the size of the feature map layer by layer, and performing alternate processing on a plurality of convolution layers and pooling layers to obtain the high-dimensional feature map;
optionally, the feature extraction module 1201 is specifically configured to, in the aspect that the high-dimensional feature map is input to the decoder part of the U-shaped convolutional neural network model to perform upsampling operation, and a target feature map of multiple scales is output:
carrying out up-sampling operation on the high-dimensional feature map, and amplifying the size of the high-dimensional feature map layer by layer;
combining the low-dimensional features extracted from each network layer in the encoding stage with the high-dimensional features symmetrically extracted in the decoding stage through a jump connection layer to obtain an initial feature map of each network layer, wherein the initial feature map of each network layer is different in scale;
and outputting the initial characteristic diagram of each network layer through the output branch of each network layer to obtain a plurality of scales of target characteristic diagrams, wherein an attention mechanism is added into the output branch of each network layer.
Optionally, the optic disc segmentation module 1202 is specifically configured to, in the aspect of performing optic disc segmentation based on the target feature map to obtain an optic disc segmentation result:
fusing the target characteristic graph to obtain an image to be segmented;
performing candidate frame regression processing on the image to be segmented to position the optic disc position in the image to be segmented and output the boundary frame information of the optic disc;
and cutting out the calibrated image block of the optic disc region according to the boundary frame information of the optic disc, inputting the image block into a pre-trained U-shaped segmentation network, and outputting the optic disc segmentation result through feature extraction and up-sampling operations.
Optionally, the blood vessel recognition module 1203 is specifically configured to, in the aspect of obtaining an arteriovenous blood vessel recognition result by segmenting the original fundus image by using the pre-trained cascade segmentation network model:
extracting a green channel image of the original fundus image, and performing histogram equalization processing on the green channel image to obtain a contrast-enhanced green channel image;
cutting the contrast-enhanced green channel image into a plurality of fundus image blocks;
and inputting the fundus image blocks into a preset cascade segmentation network model for segmentation to obtain an arteriovenous blood vessel identification result.
Optionally, the diameter ratio calculating module 1206 is specifically configured to, in the aspect of obtaining the blood vessel diameter of each blood vessel segment after the arteriovenous classification information is corrected by using the boundary detection method:
traversing in a rectangular area of 40 × 40 pixels centered on the blood vessel center point, searching for the boundary point closest to the center line of each blood vessel segment after arteriovenous category information correction, and taking the distance between that boundary point and the blood vessel center point as the radius r, so as to obtain the blood vessel diameter 2r of each blood vessel segment after arteriovenous category information correction.
Optionally, the diameter ratio calculating module 1206 calculates the ratio of the diameters of the arterial vessel and the venous vessel in the region of interest by using the following formula:
AVR = CRAE / CRVE
wherein AVR represents the diameter ratio of the artery blood vessel and the vein blood vessel in the region of interest, CRAE represents the diameter equivalent value of the central artery blood vessel of the retina,
CRAE = 0.88 × sqrt(A_i^2 + A_j^2)
A_i and A_j respectively represent the acquired maximum arterial vessel diameter and minimum arterial vessel diameter of the region of interest, and 0.88 is a fixed coefficient; CRVE represents the retinal central venous vessel diameter equivalent,
CRVE = 0.95 × sqrt(V_i^2 + V_j^2)
V_i and V_j respectively represent the maximum venous vessel diameter and the minimum venous vessel diameter of the acquired region of interest, and 0.95 is a fixed coefficient.
It should be noted that, each step in the fundus retinal blood vessel identification and quantification method shown in fig. 2 may be executed by each unit module in the fundus retinal blood vessel identification and quantification apparatus provided in the embodiment of the present application, and may achieve the same or similar beneficial effects.
Based on the above description, please refer to fig. 13, fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 13, the electronic device includes: a memory 1301 for storing one or more computer programs; a processor 1302 for calling a computer program stored in the memory 1301 to execute the steps in the above fundus retinal blood vessel identification and quantification method embodiment; a communication interface 1303 for performing input and output, where the communication interface 1303 may be one or more; it will be appreciated that the various parts of the electronic device communicate via respective bus connections. The processor 1302 is specifically configured to invoke a computer program to execute the following steps:
inputting an original fundus image into a pre-trained U-shaped convolution neural network model for processing to obtain target characteristic graphs of multiple scales;
performing optic disc segmentation based on the target feature map to obtain optic disc segmentation results;
segmenting the original fundus image by adopting a pre-trained cascade segmentation network model to obtain an arteriovenous blood vessel identification result;
positioning the region of interest based on the optic disc segmentation result to obtain a region of interest positioning result;
extracting a blood vessel center line according to an arteriovenous blood vessel identification result, detecting key points in the blood vessel center line by adopting a neighborhood connectivity judgment method, removing the key points to obtain a plurality of mutually independent blood vessel sections, and correcting arteriovenous category information on each blood vessel section to obtain each blood vessel section after arteriovenous category information is corrected;
and acquiring the vessel diameter of each vessel section corrected by the arteriovenous category information by adopting a boundary detection method based on the vessel center line of each vessel section corrected by the arteriovenous category information, and calculating the diameter ratio of the arterial vessel and the venous vessel in the region of interest according to the vessel diameter.
Optionally, the processor 1302 executes the process of inputting the original fundus image into a pre-trained U-shaped convolutional neural network model to obtain a target feature map with multiple scales, including:
inputting the original fundus image into an encoder part of the U-shaped convolution neural network model to perform key feature extraction to obtain a high-dimensional feature map;
and inputting the high-dimensional characteristic diagram into a decoder part of the U-shaped convolutional neural network model for up-sampling operation, and outputting target characteristic diagrams of multiple scales.
Optionally, the processor 1302 executes the step of inputting the original fundus image into the encoder portion of the U-shaped convolutional neural network model to perform key feature extraction, so as to obtain a high-dimensional feature map, including:
performing convolution processing on the original fundus image to extract key features to obtain a feature map with the same size as the original fundus image;
performing maximum pooling operation on the feature map obtained through convolution processing, reducing the size of the feature map layer by layer, and performing alternate processing on a plurality of convolution layers and pooling layers to obtain the high-dimensional feature map;
optionally, the processor 1302 executes the up-sampling operation of inputting the high-dimensional feature map into the decoder portion of the U-shaped convolutional neural network model, and outputs a target feature map with multiple scales, including:
carrying out up-sampling operation on the high-dimensional characteristic diagram, and amplifying the size of the high-dimensional characteristic diagram layer by layer;
combining the low-dimensional features extracted from each network layer in the encoding stage with the high-dimensional features symmetrically extracted in the decoding stage through a jump connection layer to obtain an initial feature map of each network layer, wherein the initial feature map of each network layer is different in scale;
and outputting the initial characteristic diagram of each network layer through the output branch of each network layer to obtain a plurality of scales of target characteristic diagrams, wherein an attention mechanism is added into the output branch of each network layer.
Optionally, the processor 1302 executes the optical disc segmentation based on the target feature map to obtain an optical disc segmentation result, including:
fusing the target characteristic graph to obtain an image to be segmented;
performing candidate frame regression processing on the image to be segmented to position the optic disc position in the image to be segmented and output the boundary frame information of the optic disc;
and cutting out the calibrated image block of the optic disc region according to the boundary frame information of the optic disc, inputting the image block into a pre-trained U-shaped segmentation network, and outputting the optic disc segmentation result through feature extraction and up-sampling operations.
Optionally, the processor 1302 executes the pre-trained cascade segmentation network model to segment the original fundus image, and obtains an arteriovenous blood vessel recognition result, including:
extracting a green channel image of the original fundus image, and performing histogram equalization processing on the green channel image to obtain a contrast-enhanced green channel image;
cutting the contrast-enhanced green channel image into a plurality of fundus image blocks;
and inputting the fundus image blocks into a preset cascade segmentation network model for segmentation to obtain an arteriovenous blood vessel identification result.
Optionally, the processor 1302 executes the method of acquiring the blood vessel diameter of each blood vessel segment after the arteriovenous classification information is corrected by using the boundary detection, including:
traversing in a rectangular area of 40 × 40 pixels centered on the blood vessel center point, searching for the boundary point closest to the center line of each blood vessel segment after arteriovenous category information correction, and taking the distance between that boundary point and the blood vessel center point as the radius r, so as to obtain the blood vessel diameter 2r of each blood vessel segment after arteriovenous category information correction.
Optionally, processor 1302 calculates a ratio of diameters of arterial and venous vessels in the region of interest using the following equation:
AVR = CRAE / CRVE

wherein AVR represents the diameter ratio of the arterial and venous blood vessels in the region of interest, and CRAE represents the central retinal artery equivalent diameter:

CRAE = 0.88 × √(Ai² + Aj²)

where Ai and Aj respectively represent the acquired maximum and minimum arterial vessel diameters of the region of interest, and 0.88 is a fixed coefficient; CRVE represents the central retinal vein equivalent diameter:

CRVE = 0.95 × √(Vi² + Vj²)

where Vi and Vj respectively represent the acquired maximum and minimum venous vessel diameters of the region of interest, and 0.95 is a fixed coefficient.
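Under the definitions above (CRAE and CRVE as equivalent diameters built from the extreme vessel diameters with fixed coefficients 0.88 and 0.95, and AVR as their ratio), the quantification can be sketched as follows; the root-sum-of-squares combination matches the standard Knudtson revised formulas consistent with this text, and the helper names and sample values are illustrative:

```python
import math

def crae(a_max, a_min):
    """Central retinal artery equivalent; 0.88 is the fixed coefficient."""
    return 0.88 * math.sqrt(a_max ** 2 + a_min ** 2)

def crve(v_max, v_min):
    """Central retinal vein equivalent; 0.95 is the fixed coefficient."""
    return 0.95 * math.sqrt(v_max ** 2 + v_min ** 2)

def avr(a_max, a_min, v_max, v_min):
    """Artery-to-vein diameter ratio AVR = CRAE / CRVE."""
    return crae(a_max, a_min) / crve(v_max, v_min)

# diameters in pixels measured in the region of interest (illustrative values)
print(round(avr(12.0, 9.0, 16.0, 12.0), 3))  # 0.695
```

Because both equivalents are scale-invariant combinations of diameters, the ratio is independent of image resolution, which is why AVR is the quantity reported.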
Illustratively, the electronic device may be a computer, a notebook computer, a tablet computer, a palmtop computer, a server, or the like. The electronic device may include, but is not limited to, the memory 1301, the processor 1302, and the communication interface 1303. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of an electronic device and does not limit the electronic device, which may include more or fewer components than those shown, a combination of some components, or different components.
It should be noted that, since the processor 1302 of the electronic device implements the steps in the above fundus retinal blood vessel identification and quantification method when executing the computer program, all the embodiments of the method are applicable to the electronic device and can achieve the same or similar beneficial effects.
The embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in the above fundus retinal blood vessel identification and quantification method.
Illustratively, the computer program of the computer-readable storage medium comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, since the computer program of the computer-readable storage medium implements the steps in the above fundus retinal blood vessel identification and quantification method when executed by a processor, all the embodiments of the method are applicable to the computer-readable storage medium and can achieve the same or similar beneficial effects.
The foregoing detailed description of the embodiments of the present application illustrates the principles and implementations of the present application; the above description of the embodiments is provided only to help understand the method and core concept of the present application. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A method for identifying and quantifying fundus retinal blood vessels, the method comprising:
inputting an original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales;
performing optic disc segmentation based on the target feature map to obtain optic disc segmentation results;
segmenting the original fundus image by adopting a pre-trained cascade segmentation network model to obtain an arteriovenous blood vessel identification result;
positioning the region of interest based on the optic disc segmentation result to obtain a region of interest positioning result;
extracting a blood vessel center line according to an arteriovenous blood vessel identification result, detecting key points in the blood vessel center line by adopting a neighborhood connectivity judgment method, removing the key points to obtain a plurality of mutually independent blood vessel sections, and correcting arteriovenous category information on each blood vessel section to obtain each blood vessel section after arteriovenous category information is corrected;
and acquiring the vessel diameter of each vessel section corrected by the arteriovenous category information by adopting a boundary detection method based on the vessel center line of each vessel section corrected by the arteriovenous category information, and calculating the diameter ratio of the arterial vessel and the venous vessel in the region of interest according to the vessel diameter.
2. The method as claimed in claim 1, wherein the inputting the original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales comprises:
inputting the original fundus image into an encoder part of the U-shaped convolutional neural network model to perform key feature extraction to obtain a high-dimensional feature map;
and inputting the high-dimensional characteristic diagram into a decoder part of the U-shaped convolutional neural network model for up-sampling operation, and outputting target characteristic diagrams of multiple scales.
3. The method as claimed in claim 2, wherein the inputting the original fundus image into the encoder portion of the U-shaped convolutional neural network model for key feature extraction to obtain a high-dimensional feature map comprises:
performing convolution processing on the original fundus image to extract key features to obtain a feature map with the same size as the original fundus image;
performing maximum pooling operation on the feature map obtained through convolution processing, reducing the size of the feature map layer by layer, and performing alternate processing on a plurality of convolution layers and pooling layers to obtain the high-dimensional feature map;
the step of inputting the high-dimensional feature map into a decoder part of the U-shaped convolutional neural network model for up-sampling operation and outputting target feature maps of multiple scales comprises the following steps:
carrying out up-sampling operation on the high-dimensional feature map, and amplifying the size of the high-dimensional feature map layer by layer;
combining the low-dimensional features extracted from each network layer in the encoding stage with the high-dimensional features symmetrically extracted in the decoding stage through a skip connection layer to obtain an initial feature map of each network layer, wherein the initial feature maps of the network layers differ in scale;
and outputting the initial characteristic diagram of each network layer through the output branch of each network layer to obtain a plurality of scales of target characteristic diagrams, wherein an attention mechanism is added into the output branch of each network layer.
4. The method according to any one of claims 1 to 3, wherein the performing optic disc segmentation based on the target feature map to obtain optic disc segmentation results comprises:
fusing the target feature maps to obtain an image to be segmented;
performing candidate-box regression processing on the image to be segmented to locate the optic disc in the image to be segmented and output the bounding box information of the optic disc;
and cutting out the calibrated image block of the optic disc area according to the bounding box information of the optic disc, inputting the image block into a pre-trained U-shaped segmentation network, and outputting the optic disc segmentation result through feature extraction and up-sampling operations.
5. The method according to any one of claims 1 to 3, wherein the segmenting the original fundus image by adopting a pre-trained cascade segmentation network model to obtain an arteriovenous blood vessel recognition result comprises:
extracting a green channel image of the original fundus image, and performing histogram equalization processing on the green channel image to obtain a contrast-enhanced green channel image;
cutting the contrast-enhanced green channel image into a plurality of fundus image blocks;
and inputting the fundus image blocks into a preset cascade segmentation network model for segmentation to obtain an arteriovenous blood vessel identification result.
6. The method according to any one of claims 1 to 3, wherein the obtaining of the vessel diameter of each vessel segment after the arteriovenous classification information correction by using the boundary detection method comprises:
traversing a rectangular area of 40 × 40 pixels centered on the blood vessel center point, searching for the boundary point closest to the center line of each blood vessel segment after arteriovenous classification information correction, and taking the distance between that closest boundary point and the blood vessel center point as the radius r, thereby obtaining the blood vessel diameter 2r of each corrected blood vessel segment.
7. The method of claim 1, wherein the ratio of the diameters of the arterial and venous vessels in the region of interest is calculated using the following equation:
AVR = CRAE / CRVE

wherein AVR represents the diameter ratio of the arterial and venous blood vessels in the region of interest, and CRAE represents the central retinal artery equivalent diameter:

CRAE = 0.88 × √(Ai² + Aj²)

where Ai and Aj respectively represent the acquired maximum and minimum arterial vessel diameters of the region of interest, and 0.88 is a fixed coefficient; CRVE represents the central retinal vein equivalent diameter:

CRVE = 0.95 × √(Vi² + Vj²)

where Vi and Vj respectively represent the acquired maximum and minimum venous vessel diameters of the region of interest, and 0.95 is a fixed coefficient.
8. A device for identifying and quantifying fundus retinal blood vessels, the device comprising:
the feature extraction module is used for inputting the original fundus image into a pre-trained U-shaped convolutional neural network model for processing to obtain target feature maps of multiple scales;
the optic disc segmentation module is used for performing optic disc segmentation based on the target feature maps to obtain an optic disc segmentation result;
the blood vessel recognition module is used for segmenting the original fundus image by adopting a pre-trained cascade segmentation network model to obtain an arteriovenous blood vessel recognition result;
the region positioning module is used for positioning the region of interest based on the optic disc segmentation result to obtain a region of interest positioning result;
the central line extraction module is used for extracting a blood vessel central line according to an arteriovenous blood vessel identification result, detecting key points in the blood vessel central line by adopting a neighborhood connectivity judgment method, removing the key points to obtain a plurality of mutually independent blood vessel sections, and correcting arteriovenous category information on each blood vessel section to obtain each blood vessel section after arteriovenous category information is corrected;
and the diameter ratio calculation module is used for acquiring the blood vessel diameter of each blood vessel section corrected by the arteriovenous category information by adopting a boundary detection method based on the blood vessel center line of each blood vessel section corrected by the arteriovenous category information, and calculating the diameter ratio of the arterial blood vessel and the venous blood vessel in the region of interest according to the blood vessel diameter.
9. An electronic device, characterized in that the electronic device comprises a processor, a memory and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the fundus retinal blood vessel identification and quantification method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which when executed by a processor implements the steps in the fundus retinal blood vessel identification and quantification method according to any one of claims 1 to 7.
CN202010134390.7A 2020-02-29 2020-02-29 Fundus retina blood vessel identification and quantification method, device, equipment and storage medium Active CN111340789B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010134390.7A CN111340789B (en) 2020-02-29 2020-02-29 Fundus retina blood vessel identification and quantification method, device, equipment and storage medium
PCT/CN2020/099538 WO2021169128A1 (en) 2020-02-29 2020-06-30 Method and apparatus for recognizing and quantifying fundus retina vessel, and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010134390.7A CN111340789B (en) 2020-02-29 2020-02-29 Fundus retina blood vessel identification and quantification method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111340789A true CN111340789A (en) 2020-06-26
CN111340789B CN111340789B (en) 2024-10-18

Family

ID=71184092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010134390.7A Active CN111340789B (en) 2020-02-29 2020-02-29 Fundus retina blood vessel identification and quantification method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111340789B (en)
WO (1) WO2021169128A1 (en)

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815599A (en) * 2020-07-01 2020-10-23 上海联影智能医疗科技有限公司 Image processing method, device, equipment and storage medium
CN111932535A (en) * 2020-09-24 2020-11-13 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
CN111932554A (en) * 2020-07-31 2020-11-13 青岛海信医疗设备股份有限公司 Pulmonary blood vessel segmentation method, device and storage medium
CN112330684A (en) * 2020-11-23 2021-02-05 腾讯科技(深圳)有限公司 Object segmentation method and device, computer equipment and storage medium
CN112419338A (en) * 2020-12-08 2021-02-26 深圳大学 A segmentation method for head and neck organs at risk based on anatomical prior knowledge
CN112446866A (en) * 2020-11-25 2021-03-05 上海联影医疗科技股份有限公司 Blood flow parameter calculation method, device, equipment and storage medium
CN112465772A (en) * 2020-11-25 2021-03-09 平安科技(深圳)有限公司 Fundus color photograph image blood vessel evaluation method, device, computer equipment and medium
CN112529839A (en) * 2020-11-05 2021-03-19 西安交通大学 Method and system for extracting carotid artery blood vessel center line in nuclear magnetic resonance image
CN112734828A (en) * 2021-01-28 2021-04-30 依未科技(北京)有限公司 Method, device, medium and equipment for determining center line of fundus blood vessel
CN112734784A (en) * 2021-01-28 2021-04-30 依未科技(北京)有限公司 High-precision fundus blood vessel boundary determining method, device, medium and equipment
CN112826442A (en) * 2020-12-31 2021-05-25 上海鹰瞳医疗科技有限公司 Method and device for mental state recognition based on fundus images
CN112862787A (en) * 2021-02-10 2021-05-28 昆明同心医联科技有限公司 CTA image data processing method, device and storage medium
CN113012114A (en) * 2021-03-02 2021-06-22 推想医疗科技股份有限公司 Blood vessel identification method and device, storage medium and electronic equipment
CN113192074A (en) * 2021-04-07 2021-07-30 西安交通大学 Artery and vein automatic segmentation method suitable for OCTA image
CN113269737A (en) * 2021-05-17 2021-08-17 西安交通大学 Method and system for calculating diameter of artery and vein of retina
WO2021169128A1 (en) * 2020-02-29 2021-09-02 平安科技(深圳)有限公司 Method and apparatus for recognizing and quantifying fundus retina vessel, and device and storage medium
CN113344893A (en) * 2021-06-23 2021-09-03 依未科技(北京)有限公司 High-precision fundus arteriovenous identification method, device, medium and equipment
CN113425248A (en) * 2021-06-24 2021-09-24 平安科技(深圳)有限公司 Medical image evaluation method, device, equipment and computer storage medium
CN113538463A (en) * 2021-07-22 2021-10-22 强联智创(北京)科技有限公司 Aneurysm segmentation method, device and equipment
CN113643354A (en) * 2020-09-04 2021-11-12 深圳硅基智能科技有限公司 Device for measuring blood vessel diameter based on fundus image with enhanced resolution
CN113689954A (en) * 2021-08-24 2021-11-23 平安科技(深圳)有限公司 Hypertension risk prediction method, device, equipment and medium
CN113724186A (en) * 2021-03-10 2021-11-30 腾讯科技(深圳)有限公司 Data processing method, device, equipment and medium
CN113749690A (en) * 2021-09-24 2021-12-07 无锡祥生医疗科技股份有限公司 Blood flow measuring method and device for blood vessel and storage medium
CN113792740A (en) * 2021-09-16 2021-12-14 平安科技(深圳)有限公司 Arteriovenous segmentation method, system, equipment and medium for fundus color photography
CN113951813A (en) * 2021-11-09 2022-01-21 北京工业大学 Retinal blood vessel branch angle calculation method and device and electronic equipment
CN114037663A (en) * 2021-10-27 2022-02-11 北京医准智能科技有限公司 Blood vessel segmentation method, device and computer readable medium
CN114359284A (en) * 2022-03-18 2022-04-15 北京鹰瞳科技发展股份有限公司 Method for analyzing retinal fundus images and related products
CN114359280A (en) * 2022-03-18 2022-04-15 武汉楚精灵医疗科技有限公司 Gastric mucosa image boundary quantification method, device, terminal and storage medium
CN114387219A (en) * 2021-12-17 2022-04-22 依未科技(北京)有限公司 Method, device, medium and equipment for detecting characteristics of fundus arteriovenous crossing compression
CN114387210A (en) * 2021-12-03 2022-04-22 依未科技(北京)有限公司 Method, apparatus, medium, and device for fundus feature acquisition
CN114627077A (en) * 2022-03-15 2022-06-14 平安科技(深圳)有限公司 Image segmentation method and device, electronic equipment and storage medium
WO2022142030A1 (en) * 2020-12-28 2022-07-07 深圳硅基智能科技有限公司 Method and system for measuring lesion features of hypertensive retinopathy
CN115760873A (en) * 2022-11-08 2023-03-07 温州谱希医学检验实验室有限公司 Method for calculating diameter of retinal vessel by fundus oculi illumination based on region segmentation
CN116843612A (en) * 2023-04-20 2023-10-03 西南医科大学附属医院 Image processing method for diabetic retinopathy diagnosis
CN117351009A (en) * 2023-12-04 2024-01-05 江苏富翰医疗产业发展有限公司 Method and system for generating blood oxygen saturation data based on multispectral fundus image
CN118429319A (en) * 2024-05-20 2024-08-02 爱尔眼科医院集团股份有限公司长沙爱尔眼科医院 Retinal vascular imaging segmentation method, device, equipment and medium

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989643B (en) * 2021-10-26 2023-09-01 萱闱(北京)生物科技有限公司 Pipeline state detection method, device, medium and computing equipment
CN114418987B (en) * 2022-01-17 2024-05-28 北京工业大学 Retina blood vessel segmentation method and system with multi-stage feature fusion
WO2023186133A1 (en) * 2022-04-02 2023-10-05 武汉联影智融医疗科技有限公司 System and method for puncture path planning
CN114820473B (en) * 2022-04-10 2024-10-22 复旦大学 Medical image segmentation method based on lesion area perception and uncertainty guidance
CN114926892A (en) * 2022-06-14 2022-08-19 中国人民大学 A method, system and readable medium for fundus image matching based on deep learning
AU2023294396A1 (en) * 2022-06-16 2024-12-19 Eyetelligence Pty Ltd Fundus image analysis system
CN115294126B (en) * 2022-10-08 2022-12-16 南京诺源医疗器械有限公司 Cancer cell intelligent identification method for pathological image
CN115690124B (en) * 2022-11-02 2023-05-12 中国科学院苏州生物医学工程技术研究所 High-precision single-frame fundus fluorescence contrast image leakage area segmentation method and system
CN116206114B (en) * 2023-04-28 2023-08-01 成都云栈科技有限公司 Portrait extraction method and device under complex background
CN116309585B (en) * 2023-05-22 2023-08-22 山东大学 Method and system for target area recognition in breast ultrasound images based on multi-task learning
CN116473673B (en) * 2023-06-20 2024-02-27 浙江华诺康科技有限公司 Path planning method, device, system and storage medium for endoscope
CN116824116B (en) * 2023-06-26 2024-07-26 爱尔眼科医院集团股份有限公司 Ultra-wide-angle fundus image recognition method, device, equipment and storage medium
CN116524548B (en) * 2023-07-03 2023-12-26 中国科学院自动化研究所 Vascular structure information extraction method, device and storage medium
CN117038088B (en) * 2023-10-09 2024-02-02 北京鹰瞳科技发展股份有限公司 Method, device, equipment and medium for determining onset of diabetic retinopathy
CN117746146B (en) * 2023-12-22 2024-06-14 博奥生物集团有限公司 Method and device for determining blood vessel color in the white eye region
CN117934416B (en) * 2024-01-23 2024-10-15 深圳市铱硙医疗科技有限公司 CTA internal carotid artery segmentation method and system based on machine learning
CN118297974B (en) * 2024-06-06 2024-08-13 柏意慧心(杭州)网络科技有限公司 Blood vessel interlayer cavity separation method and device, storage medium and equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150320313A1 (en) * 2014-05-08 2015-11-12 Universita Della Calabria Portable medical device and method for quantitative retinal image analysis through a smartphone
WO2019237148A1 (en) * 2018-06-13 2019-12-19 Commonwealth Scientific And Industrial Research Organisation Retinal image analysis

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657612A (en) * 2017-10-16 2018-02-02 西安交通大学 Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment
CN109166124B (en) * 2018-11-20 2021-12-14 中南大学 A Quantitative Method for Retinal Vascular Morphology Based on Connected Regions
CN111340789B (en) * 2020-02-29 2024-10-18 平安科技(深圳)有限公司 Fundus retina blood vessel identification and quantification method, device, equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150320313A1 (en) * 2014-05-08 2015-11-12 Universita Della Calabria Portable medical device and method for quantitative retinal image analysis through a smartphone
WO2019237148A1 (en) * 2018-06-13 2019-12-19 Commonwealth Scientific And Industrial Research Organisation Retinal image analysis

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021169128A1 (en) * 2020-02-29 2021-09-02 平安科技(深圳)有限公司 Method and apparatus for recognizing and quantifying fundus retina vessel, and device and storage medium
CN111815599A (en) * 2020-07-01 2020-10-23 上海联影智能医疗科技有限公司 Image processing method, device, equipment and storage medium
CN111815599B (en) * 2020-07-01 2023-12-15 上海联影智能医疗科技有限公司 Image processing method, device, equipment and storage medium
CN111932554A (en) * 2020-07-31 2020-11-13 青岛海信医疗设备股份有限公司 Pulmonary blood vessel segmentation method, device and storage medium
CN111932554B (en) * 2020-07-31 2024-03-22 青岛海信医疗设备股份有限公司 Lung vessel segmentation method, equipment and storage medium
CN113643353A (en) * 2020-09-04 2021-11-12 深圳硅基智能科技有限公司 Method for measuring enhanced resolution of blood vessel diameter of fundus image
CN113643354A (en) * 2020-09-04 2021-11-12 深圳硅基智能科技有限公司 Device for measuring blood vessel diameter based on fundus image with enhanced resolution
CN113643354B (en) * 2020-09-04 2023-10-13 深圳硅基智能科技有限公司 Measuring device of vascular caliber based on fundus image with enhanced resolution
CN113643353B (en) * 2020-09-04 2024-02-06 深圳硅基智能科技有限公司 Measurement method for enhancing resolution of vascular caliber of fundus image
CN111932535A (en) * 2020-09-24 2020-11-13 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image
CN112529839A (en) * 2020-11-05 2021-03-19 西安交通大学 Method and system for extracting carotid artery blood vessel center line in nuclear magnetic resonance image
CN112529839B (en) * 2020-11-05 2023-05-02 西安交通大学 Method and system for extracting carotid vessel centerline in nuclear magnetic resonance image
CN112330684A (en) * 2020-11-23 2021-02-05 腾讯科技(深圳)有限公司 Object segmentation method and device, computer equipment and storage medium
CN112330684B (en) * 2020-11-23 2022-09-13 腾讯科技(深圳)有限公司 Object segmentation method and device, computer equipment and storage medium
CN112465772B (en) * 2020-11-25 2023-09-26 平安科技(深圳)有限公司 Fundus colour photographic image blood vessel evaluation method, device, computer equipment and medium
CN112465772A (en) * 2020-11-25 2021-03-09 平安科技(深圳)有限公司 Fundus color photograph image blood vessel evaluation method, device, computer equipment and medium
CN112446866A (en) * 2020-11-25 2021-03-05 上海联影医疗科技股份有限公司 Blood flow parameter calculation method, device, equipment and storage medium
CN112419338A (en) * 2020-12-08 2021-02-26 深圳大学 A segmentation method for head and neck organs at risk based on anatomical prior knowledge
WO2022142030A1 (en) * 2020-12-28 2022-07-07 深圳硅基智能科技有限公司 Method and system for measuring lesion features of hypertensive retinopathy
CN112826442A (en) * 2020-12-31 2021-05-25 上海鹰瞳医疗科技有限公司 Method and device for mental state recognition based on fundus images
CN112734828A (en) * 2021-01-28 2021-04-30 依未科技(北京)有限公司 Method, device, medium and equipment for determining center line of fundus blood vessel
CN112734784A (en) * 2021-01-28 2021-04-30 依未科技(北京)有限公司 High-precision fundus blood vessel boundary determining method, device, medium and equipment
CN112734828B (en) * 2021-01-28 2023-02-24 依未科技(北京)有限公司 Method, device, medium and equipment for determining center line of fundus blood vessel
CN112862787A (en) * 2021-02-10 2021-05-28 昆明同心医联科技有限公司 CTA image data processing method, device and storage medium
CN112862787B (en) * 2021-02-10 2022-11-15 昆明同心医联科技有限公司 CTA image data processing method, device and storage medium
CN113012114B (en) * 2021-03-02 2021-12-03 推想医疗科技股份有限公司 Blood vessel identification method and device, storage medium and electronic equipment
CN113012114A (en) * 2021-03-02 2021-06-22 推想医疗科技股份有限公司 Blood vessel identification method and device, storage medium and electronic equipment
CN113724186A (en) * 2021-03-10 2021-11-30 腾讯科技(深圳)有限公司 Data processing method, device, equipment and medium
CN113192074A (en) * 2021-04-07 2021-07-30 西安交通大学 Artery and vein automatic segmentation method suitable for OCTA image
CN113192074B (en) * 2021-04-07 2024-04-05 西安交通大学 Automatic arteriovenous segmentation method suitable for OCTA image
CN113269737B (en) * 2021-05-17 2024-03-19 北京鹰瞳科技发展股份有限公司 Fundus retina artery and vein vessel diameter calculation method and system
CN113269737A (en) * 2021-05-17 2021-08-17 西安交通大学 Method and system for calculating diameter of artery and vein of retina
CN113344893A (en) * 2021-06-23 2021-09-03 依未科技(北京)有限公司 High-precision fundus arteriovenous identification method, device, medium and equipment
CN113425248B (en) * 2021-06-24 2024-03-08 平安科技(深圳)有限公司 Medical image evaluation method, device, equipment and computer storage medium
CN113425248A (en) * 2021-06-24 2021-09-24 平安科技(深圳)有限公司 Medical image evaluation method, device, equipment and computer storage medium
CN113538463A (en) * 2021-07-22 2021-10-22 强联智创(北京)科技有限公司 Aneurysm segmentation method, device and equipment
CN113689954B (en) * 2021-08-24 2024-10-18 平安科技(深圳)有限公司 Hypertension risk prediction method, device, equipment and medium
CN113689954A (en) * 2021-08-24 2021-11-23 平安科技(深圳)有限公司 Hypertension risk prediction method, device, equipment and medium
CN113792740B (en) * 2021-09-16 2023-12-26 平安创科科技(北京)有限公司 Artery and vein segmentation method, system, equipment and medium for fundus color illumination
CN113792740A (en) * 2021-09-16 2021-12-14 平安科技(深圳)有限公司 Arteriovenous segmentation method, system, equipment and medium for fundus color photography
CN113749690A (en) * 2021-09-24 2021-12-07 无锡祥生医疗科技股份有限公司 Blood flow measuring method and device for blood vessel and storage medium
CN113749690B (en) * 2021-09-24 2024-01-30 无锡祥生医疗科技股份有限公司 Blood vessel blood flow measuring method, device and storage medium
CN114037663A (en) * 2021-10-27 2022-02-11 北京医准智能科技有限公司 Blood vessel segmentation method, device and computer readable medium
CN113951813A (en) * 2021-11-09 2022-01-21 北京工业大学 Retinal blood vessel branch angle calculation method and device and electronic equipment
CN114387210A (en) * 2021-12-03 2022-04-22 依未科技(北京)有限公司 Method, apparatus, medium, and device for fundus feature acquisition
CN114387219A (en) * 2021-12-17 2022-04-22 依未科技(北京)有限公司 Method, device, medium and equipment for detecting characteristics of fundus arteriovenous crossing compression
CN114627077A (en) * 2022-03-15 2022-06-14 平安科技(深圳)有限公司 Image segmentation method and device, electronic equipment and storage medium
CN114359280A (en) * 2022-03-18 2022-04-15 武汉楚精灵医疗科技有限公司 Gastric mucosa image boundary quantification method, device, terminal and storage medium
CN114359284A (en) * 2022-03-18 2022-04-15 北京鹰瞳科技发展股份有限公司 Method for analyzing retinal fundus images and related products
CN115760873A (en) * 2022-11-08 2023-03-07 温州谱希医学检验实验室有限公司 Method for calculating diameter of retinal vessel by fundus oculi illumination based on region segmentation
CN116843612A (en) * 2023-04-20 2023-10-03 西南医科大学附属医院 Image processing method for diabetic retinopathy diagnosis
CN117351009A (en) * 2023-12-04 2024-01-05 江苏富翰医疗产业发展有限公司 Method and system for generating blood oxygen saturation data based on multispectral fundus image
CN117351009B (en) * 2023-12-04 2024-02-23 江苏富翰医疗产业发展有限公司 Method and system for generating blood oxygen saturation data based on multispectral fundus image
CN118429319A (en) * 2024-05-20 2024-08-02 爱尔眼科医院集团股份有限公司长沙爱尔眼科医院 Retinal vascular imaging segmentation method, device, equipment and medium

Also Published As

Publication number Publication date
CN111340789B (en) 2024-10-18
WO2021169128A1 (en) 2021-09-02

Similar Documents

Publication Publication Date Title
CN111340789B (en) Fundus retina blood vessel identification and quantification method, device, equipment and storage medium
CN110751636B (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
WO2021003821A1 (en) Cell detection method and apparatus for a glomerular pathological section image, and device
Hassan et al. Joint segmentation and quantification of chorioretinal biomarkers in optical coherence tomography scans: A deep learning approach
US11967181B2 (en) Method and device for retinal image recognition, electronic equipment, and storage medium
Chetoui et al. Explainable end-to-end deep learning for diabetic retinopathy detection across multiple datasets
Liu et al. A framework of wound segmentation based on deep convolutional networks
CN112465772B (en) Fundus colour photographic image blood vessel evaluation method, device, computer equipment and medium
Zhao et al. Saliency driven vasculature segmentation with infinite perimeter active contour model
Vij et al. A systematic review on diabetic retinopathy detection using deep learning techniques
CN111222361A (en) Method and system for analyzing hypertension retina vascular change characteristic data
CN111882566B (en) Blood vessel segmentation method, device, equipment and storage medium for retina image
Cavalcanti et al. Macroscopic pigmented skin lesion segmentation and its influence on lesion classification and diagnosis
CN108961334B (en) A method for measuring retinal vessel wall thickness based on image registration
CN111789572A (en) Determining hypertension levels from retinal vasculature images
CN112102259A (en) Image segmentation algorithm based on boundary guide depth learning
Jeena et al. Stroke diagnosis from retinal fundus images using multi texture analysis
Ding et al. Multi-scale morphological analysis for retinal vessel detection in wide-field fluorescein angiography
Morales et al. Segmentation and analysis of retinal vascular tree from fundus images processing
CN110874597A (en) Blood vessel feature extraction method, device and system for fundus image and storage medium
Pranav et al. Comparative study of skin lesion classification using dermoscopic images
CN114757944B (en) Blood vessel image analysis method and device and storage medium
CN115100178A (en) Method, device, medium and equipment for evaluating morphological characteristics of fundus blood vessels
CN112949585B (en) Method, device, electronic device and storage medium for identifying blood vessels in fundus images
CN116250801A (en) Blood oxygen saturation measuring method and system based on eye images

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40023085

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant