CN110889859A - U-shaped network for fundus image blood vessel segmentation - Google Patents
U-shaped network for fundus image blood vessel segmentation
Info
- Publication number: CN110889859A
- Application number: CN201911095957.8A
- Authority: CN (China)
- Prior art keywords: feature, module, convolution, unit, output
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/12—Edge-based segmentation (G—Physics; G06—Computing; G06T—Image data processing or generation; G06T7/00—Image analysis; G06T7/10—Segmentation; edge detection)
- G06T7/90—Determination of colour characteristics (G—Physics; G06—Computing; G06T—Image data processing or generation; G06T7/00—Image analysis)
- G06T2207/30041—Eye; Retina; Ophthalmic (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/30—Subject of image; G06T2207/30004—Biomedical image processing)
Abstract
Embodiments of the invention relate to a U-shaped network for fundus image blood vessel segmentation. The U-shaped network includes a feature encoder and a feature decoder. The feature encoder is connected to the feature decoder and comprises M down-sampling modules connected in series, which extract image features from the fundus image. The feature decoder comprises N up-sampling modules connected in series; the N up-sampling modules are connected to N of the down-sampling modules, respectively, and output a blood vessel segmentation result for the fundus image. The invention addresses the technical problem in the related art that low vessel segmentation accuracy prevents the blood vessel features of a fundus image from being fully characterized.
Description
Technical Field
The invention relates to the field of fundus image segmentation, and in particular to a U-shaped network for fundus image blood vessel segmentation.
Background
Retinal fundus image analysis is important for ophthalmologists in diagnosing fundus diseases such as diabetic retinopathy and glaucoma, as well as diseases with fundus manifestations such as hypertension and coronary heart disease. Without timely diagnosis and treatment, there is a risk of blindness or worse. Fundus blood vessels are among the most basic tissue structures in retinal fundus images, because the eye is the only organ in which blood vessels can be observed directly without invading the body. Moreover, the retina and optic nerve connect directly to the brain, and the ocular microvessels are closely linked to the cerebral and cardiac vasculature. Changes in vessel morphology in fundus images can therefore reflect, to a certain extent, related chronic diseases such as coronary heart disease and hypertension, which makes fundus vessel analysis significant for disease screening.
Automatic segmentation of blood vessels in retinal fundus images has attracted significant attention over the last few decades. Existing segmentation algorithms fall into two categories: traditional segmentation methods and neural network segmentation methods.
1. Traditional segmentation methods: these segment blood vessels from fundus images mainly using color-difference information. Early approaches typically relied on the color-intensity difference at vessel edges, determining boundaries with a threshold and applying pre- and post-processing such as morphological operations.
2. Neural network segmentation methods: because traditional methods perform vessel segmentation without using any label information, neural network methods show clear advantages over these unsupervised approaches. Vessel segmentation can be viewed as a binary classification problem at the pixel level: each pixel belongs either to the background or to a blood vessel. Pixel-level segmentation is an important area of computer vision and can be treated as a semantic segmentation problem. A popular deep-learning approach to semantic segmentation is image-block classification, in which each pixel is classified independently using the image block around it; image-block classification is used mainly because classification networks typically end in fully connected layers and require a fixed-size input. A common alternative performs fundus vessel segmentation with a fully convolutional network.
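By way of illustration only, the image-block classification idea described above, in which each pixel is classified from a fixed-size block around it, can be sketched in plain Python. The function name and the block radius are hypothetical, not taken from the patent:

```python
def extract_patch(image, row, col, radius):
    """Return the (2*radius+1) x (2*radius+1) neighbourhood around
    (row, col), zero-padding beyond the image border.  A classifier
    would then label the centre pixel from this patch alone."""
    size = 2 * radius + 1
    patch = [[0] * size for _ in range(size)]
    h, w = len(image), len(image[0])
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = row + dr, col + dc
            if 0 <= r < h and 0 <= c < w:
                patch[dr + radius][dc + radius] = image[r][c]
    return patch
```

Because every pixel needs its own fixed-size patch, this scheme is expensive, which is one motivation for the fully convolutional alternative mentioned above.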
With the above unsupervised traditional methods, the model usually involves additional conditions that must be satisfied, places high demands on the quality of the image itself, and yields lower vessel segmentation accuracy. As for the fully convolutional networks mentioned in the neural network segmentation methods, much useful information is lost through layer-by-layer feature extraction, so the learned parameters of the fundus vessel segmentation model cannot fully characterize the vessel features of the fundus image.
No effective solution to these problems has yet been proposed.
Disclosure of Invention
Embodiments of the invention provide a U-shaped network for fundus image blood vessel segmentation that at least solves the technical problem in the related art that low vessel segmentation accuracy prevents the vessel features of a fundus image from being fully characterized.
According to an aspect of an embodiment of the present invention, there is provided a U-type network for fundus image vessel segmentation, the U-type network including: a feature encoder and a feature decoder, wherein: the feature encoder is connected with the feature decoder and comprises M down-sampling modules connected in series and used for extracting image features of the fundus image; the feature decoder comprises N up-sampling modules which are connected in series, and the N up-sampling modules are respectively connected with the N down-sampling modules and used for outputting segmentation results of the fundus image; wherein M and N are positive integers greater than 1, and M is greater than or equal to N.
Further, the feature map size output by the downsampling module is the same as the feature map size input by the upsampling module to which the downsampling module is connected.
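As a sketch of this size constraint, stride-2 down-sampling and 2x up-sampling produce mirrored feature-map sizes, so each up-sampling module can splice a same-size encoder feature map. The input size of 512 and the module count are illustrative assumptions, not taken from the patent:

```python
def encoder_sizes(input_size, num_modules):
    """Spatial side length after each stride-2 down-sampling module."""
    sizes, s = [], input_size
    for _ in range(num_modules):
        s //= 2
        sizes.append(s)
    return sizes

def decoder_sizes(bottleneck_size, num_modules):
    """Spatial side length after each 2x up-sampling module."""
    sizes, s = [], bottleneck_size
    for _ in range(num_modules):
        s *= 2
        sizes.append(s)
    return sizes
```

With a 512-pixel input and 5 modules on each side, the encoder produces sizes 256, 128, 64, 32, 16 and the decoder mirrors them back, so the i-th up-sampling module always meets an encoder feature map of matching size.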
Further, the up-sampling module comprises a convolution unit, a batch normalization and activation unit, and a splicing unit, wherein: the splicing unit is connected to the convolution unit; and the convolution unit, connected to both the splicing unit and the batch normalization and activation unit, convolves the output of the splicing unit and feeds the convolution result to the batch normalization and activation unit.
Further, the U-type network further comprises a convolution module, wherein: and the convolution module is respectively connected with the feature encoder and the feature decoder.
Further, the U-shaped network also includes the following: the 1st up-sampling module in the feature decoder comprises a first splicing unit, which splices the feature map output by the convolution module with the feature map output by the down-sampling module corresponding to the 1st up-sampling module; the 2nd to Nth up-sampling modules in the feature decoder comprise second splicing units, each of which splices the feature map output by the down-sampling module corresponding to its own up-sampling module with the feature map output by the adjacent up-sampling module.
Further, the value of N is 5.
Further, the feature decoder also comprises an output module configured to perform a convolution operation on the output of the Nth up-sampling module followed by a sigmoid activation.
Further, before the U-shaped network is trained, a training data set is subjected to preset processing, wherein the preset processing comprises rotation, horizontal overturning and vertical overturning.
Further, in training the U-shaped network, the cross-entropy loss between the probability map output by the network and the ground-truth labels of the training data set is calculated and optimized through a backpropagation algorithm.
In embodiments of the invention, a feature encoder is combined with a feature decoder: the feature encoder is connected to the feature decoder, the M serially connected down-sampling modules in the encoder extract image features from the fundus image, and the N up-sampling modules in the decoder, connected respectively to N down-sampling modules in the encoder, output the segmentation result of the fundus image. This improves the vessel segmentation accuracy of fundus images: image features are retained during feature extraction and their loss is avoided, which solves the technical problem in the related art that low segmentation accuracy prevents the vessel features of a fundus image from being fully characterized.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic diagram of an alternative U-network for fundus image vessel segmentation according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an alternative upsampling module in accordance with embodiments of the present invention;
FIG. 3 is a schematic diagram of yet another alternative U-network for fundus image vessel segmentation in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of yet another alternative U-network for fundus image vessel segmentation in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of yet another alternative U-network for fundus image vessel segmentation in accordance with embodiments of the present invention;
fig. 6 is a schematic diagram of yet another alternative U-network for fundus image vessel segmentation according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
According to an embodiment of the present invention, there is provided a U-type network for fundus image vessel segmentation, as shown in fig. 1, including: a feature encoder 10 and a feature decoder 20, wherein:
1) the feature encoder 10 is connected with the feature decoder, and the feature encoder 10 comprises M downsampling modules 102 connected in series and used for extracting image features of the fundus image;
2) the feature decoder 20 includes N up-sampling modules 202 connected in series, and the N up-sampling modules 202 are respectively connected to the N down-sampling modules 102, and are configured to output a segmentation result of the fundus image; wherein M and N are positive integers greater than 1, and M is greater than or equal to N.
In this embodiment, the feature encoder extracts image features layer by layer through the M down-sampling modules, progressively obtaining higher-level features of the image. If the decoding process relied solely on the output features of the previous decoder stage, up-sampling those features alone would leave the image features incomplete. The up-sampling modules in the feature decoder therefore retain part of the image features from the encoder's down-sampling modules, so that the fundus vessel segmentation result remains accurate.
In this embodiment, the feature encoder may preferably use an EfficientNet-series network; for example, a U-shaped network composed of an EfficientNet-B5 encoder and a feature decoder containing 5 up-sampling modules can perform fundus image vessel segmentation. While processing the image features output by the feature encoder, each up-sampling module also processes the image features of a down-sampling module in the feature encoder, so that part of the image features in the down-sampling module are retained and loss of image features is avoided.
It should be noted that this embodiment combines a feature encoder with a feature decoder: the encoder is connected to the decoder, the M serially connected down-sampling modules in the encoder extract image features from the fundus image, and the N up-sampling modules in the decoder, connected respectively to N down-sampling modules in the encoder, output the segmentation result. The vessel segmentation accuracy of the fundus image is thereby improved, the features of the fundus image are retained during feature extraction, and loss of image features is avoided.
Optionally, in this embodiment, the feature map size output by the downsampling module is the same as the feature map size input by the upsampling module to which the downsampling module is connected.
In a specific application scenario, so that the input features of an up-sampling module can be spliced with the output features of the correspondingly connected down-sampling module, the feature map size output by the down-sampling module is set equal to the feature map size input to the up-sampling module, and no image features are lost.
Optionally, in this embodiment, the up-sampling module includes a convolution unit, a batch normalization activation unit, and a splicing unit, where: the splicing unit is connected to the convolution unit; and the convolution unit, connected to both the splicing unit and the batch normalization activation unit, convolves the output of the splicing unit and feeds the convolution result to the batch normalization activation unit.
In this embodiment, the up-sampling modules in the feature decoder up-sample the image features in sequence. Specifically, as shown in fig. 2, an up-sampling module includes a convolution unit 22, a batch normalization activation unit 24, and a splicing unit 26; the splicing unit 26 is connected to the convolution unit 22, and the convolution unit 22 is connected to the batch normalization activation unit 24. The splicing unit 26 splices a feature map A output by the previous stage with the feature map B output by the down-sampling module corresponding to the up-sampling module, and inputs the splicing result to the convolution unit 22.
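A minimal sketch of such an up-sampling module in PyTorch, the framework the embodiment itself uses, is shown below. The channel counts, the bilinear 2x up-sampling, and the 3 x 3 kernel are illustrative assumptions; the patent only fixes the splice, convolution, and batch normalization + activation order:

```python
import torch
import torch.nn as nn

class UpSampleBlock(nn.Module):
    """Sketch of one up-sampling module: up-sample the previous
    output, splice (concatenate) the same-size encoder feature map,
    then convolution + batch normalization + ReLU activation."""

    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.conv = nn.Conv2d(in_ch + skip_ch, out_ch,
                              kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x, skip):
        x = self.up(x)                   # 2x spatial up-sampling
        x = torch.cat([x, skip], dim=1)  # splice with encoder feature map
        return self.relu(self.bn(self.conv(x)))
```

A full decoder would chain five such blocks, each splicing the output of the matching encoder down-sampling module.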
Optionally, in this embodiment, the U-type network further includes a convolution module, where: and the convolution module is respectively connected with the feature encoder and the feature decoder.
In a specific application scenario, as shown in fig. 3, the U-shaped network includes a feature encoder 30, a feature decoder 32, and a convolution module 34, where the convolution module 34 is connected to the feature encoder 30 and the feature decoder 32, respectively. Preferably, the convolution module uses a 1 × 1 convolution kernel.
Optionally, in this embodiment, the 1st up-sampling module in the feature decoder includes a first splicing unit, which splices the feature map output by the convolution module with the feature map output by the down-sampling module corresponding to the 1st up-sampling module; the 2nd to Nth up-sampling modules include second splicing units, each of which splices the feature map output by the down-sampling module corresponding to the up-sampling module in which it is located with the feature map output by the adjacent up-sampling module.
In a specific application scenario, the U-shaped network shown in fig. 4 includes a feature encoder 40, a feature decoder 42, and a convolution module 44. The feature encoder 40 contains 5 down-sampling modules 400, named 400-1 through 400-5 in connection order; the feature decoder 42 contains 5 up-sampling modules 420, named 420-1 through 420-5 in connection order. The splicing unit in up-sampling module 420-1 is the first splicing unit, and the feature map output by each down-sampling module has the same size as the feature map input to the corresponding up-sampling module. The splicing unit in 420-1 splices the feature map output by down-sampling module 400-5 with the feature map output by convolution module 44, then passes the result to the convolution unit and batch normalization activation unit in the module. The splicing unit in 420-2 is a second splicing unit that splices the feature maps output by 400-4 and 420-1; the splicing unit in 420-3 splices the outputs of 400-3 and 420-2; the splicing unit in 420-4 splices the outputs of 400-2 and 420-3; and the splicing unit in 420-5 splices the outputs of 400-1 and 420-4.
Optionally, in this embodiment, N is 5.
In a preferred embodiment, the feature encoder is an EfficientNet-B5 encoder, and the number of up-sampling modules in the feature decoder is preferably set to 5.
Optionally, in this embodiment, the feature decoder also includes an output module configured to perform a convolution operation on the output of the Nth up-sampling module followed by a sigmoid activation.
In a specific application scenario, as shown in fig. 5, the U-shaped network includes a feature encoder 50, a feature decoder 54, and a convolution module 52, where the feature decoder 54 includes an up-sampling module 540 and an output module 542, and the output module 542 performs a convolution operation on the output of the up-sampling module 540 followed by a sigmoid activation. It should be noted that in fig. 5 the feature encoder 50 contains at least one down-sampling module and the feature decoder 54 contains at least one up-sampling module; the remaining down-sampling and up-sampling modules are not shown, and the inputs of the up-sampling module come from the corresponding down-sampling module in the feature encoder and from the convolution module.
Optionally, in this embodiment, before training the U-type network, the training data set is subjected to a preset process, where the preset process includes rotation, horizontal flipping, and vertical flipping.
Specifically, before training the fundus vessel segmentation U-shaped network, a training data set is prepared; for example, the neural network model is trained using clinically acquired retinal image vessel segmentation data as the training set, and model performance is evaluated using the 400 publicly available REFUGE images as the test set. The training data set comprises 1349 original color fundus images and their one-to-one corresponding vessel segmentation label maps.
The fundus images in the training data set are then preprocessed to obtain the training example images fed to the model. In this step, the training set is expanded by applying various transformations to each image: rotations of 45°, 90°, 135°, 180°, 225° and 270°, horizontal and vertical flipping, and brightness adjustment by factors of 0.5 and 1.3.
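The flip and brightness transformations above can be sketched in plain Python on a list-of-lists image. The helper names are hypothetical, and the arbitrary-angle rotations are omitted here since they require interpolation:

```python
def hflip(img):
    """Horizontal flip: reverse each row."""
    return [row[::-1] for row in img]

def vflip(img):
    """Vertical flip: reverse the row order."""
    return img[::-1]

def adjust_brightness(img, factor, max_val=255):
    """Scale every pixel by `factor`, clipping to the valid range."""
    return [[min(max_val, int(p * factor)) for p in row] for row in img]

# Transformations listed in the patent: 6 rotations, 2 flips,
# 2 brightness factors, i.e. 10 extra variants per image.
NUM_VARIANTS_PER_IMAGE = 6 + 2 + 2
```

Applied to the 1349 original images, this augmentation substantially expands the effective training set.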
Optionally, in this embodiment, in the process of training the U-type network, the cross entropy loss between the probability map output by the U-type network and the real labels of the training data set is calculated, and the optimization is performed through a back propagation algorithm.
Specifically, with the U-shaped network model in this embodiment, the input data is an unprocessed retinal image and the output is the corresponding vessel segmentation probability map. The learning rate is initially set to 10⁻⁴, the decay strategy is poly, the momentum is 0.9, and the learning rate is attenuated as training proceeds.
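A common form of the poly decay policy mentioned above is sketched below; the power of 0.9 is a conventional choice, not stated in the patent:

```python
def poly_lr(base_lr, step, max_steps, power=0.9):
    """Polynomial ("poly") learning-rate decay:
    lr = base_lr * (1 - step / max_steps) ** power."""
    return base_lr * (1.0 - step / max_steps) ** power
```

The rate starts at `base_lr` and decays smoothly to zero at the final step.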
During the testing phase: in evaluating the vessel segmentation results on fundus images, this embodiment uses the Dice coefficient to evaluate the overall similarity between the segmentation result and the ground-truth label:
Dice(X, Y) = 2|X ∩ Y| / (|X| + |Y|)
where X is the ground-truth label and Y is the fundus vessel segmentation prediction map; the Dice coefficient of the fundus image vessels in this embodiment's experiments is calculated with the above formula.
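A minimal sketch of the Dice coefficient in plain Python, following the formula above on flattened binary masks; the smoothing term `eps` is an implementation convenience to avoid division by zero, not part of the patent:

```python
def dice_coefficient(x, y, eps=1e-7):
    """Dice = 2|X ∩ Y| / (|X| + |Y|) for flat 0/1 masks,
    where x is the ground-truth mask and y the prediction."""
    inter = sum(a * b for a, b in zip(x, y))  # |X ∩ Y|
    return (2.0 * inter + eps) / (sum(x) + sum(y) + eps)
```

Identical masks score 1.0; disjoint masks score approximately 0.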
To further illustrate the technical solution of this embodiment, it is described below through a specific example:
specifically, a U-shaped network for fundus image vessel segmentation is shown in fig. 6, comprising a feature encoder 60 (EfficientNet-B5) and a feature decoder that includes a convolution module 62, up-sampling modules 64 (numbered 64-1 through 64-5), and an output module 66. The model structure is built on the deep-learning framework PyTorch, and the network is implemented directly using PyTorch layers. The specific operation steps of the network model are:
step 1: and (3) a characteristic coding process: when the original image is input to the U-type network, the data first passes through the encoder EfficientNet-B5 (the dotted line frame in fig. 1 is EfficientNet-B5), and the final feature map is reduced to 32 times that of the original image after five passes of the convolution operation of convolution + batchnorm (bn) + Relu, which is denoted as F1.
Step 2: the F1 output from step 1 undergoes a 1 × 1 convolution in convolution module 62 to reduce the number of feature maps, then flows sequentially through the 5 up-sampling modules 64 (DEConv); the input of each up-sampling module is spliced with the same-size feature map from EfficientNet-B5. The output feature map of the last up-sampling module is denoted F2.
Step 3: F2 is input to the output module 66, where a 1 × 1 convolution and a sigmoid operation produce the prediction probability map.
Step 4: the cross-entropy loss between the prediction probability map and the ground-truth label is calculated and optimized via a backpropagation algorithm.
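The per-pixel cross-entropy of step 4 can be sketched in plain Python as follows, for flat lists of predicted probabilities and 0/1 labels; the clamping constant is an implementation convenience, not taken from the patent:

```python
import math

def binary_cross_entropy(probs, labels, eps=1e-7):
    """Mean binary cross-entropy between predicted vessel
    probabilities and 0/1 ground-truth labels."""
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical safety
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(probs)
```

In training, this scalar loss is what backpropagation differentiates to update the network parameters.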
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. The above-described apparatus embodiments are merely illustrative; for example, the division of the units is only one type of logical-function division, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through interfaces, units, or modules, and may be electrical or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements should also be regarded as falling within the protection scope of the present invention.
Claims (9)
1. A U-shaped network for fundus image vessel segmentation, comprising:
a feature encoder and a feature decoder, wherein:
the feature encoder is connected with the feature decoder and comprises M down-sampling modules connected in series and used for extracting image features of the fundus image;
the feature decoder comprises N up-sampling modules which are connected in series, and the N up-sampling modules are respectively connected with the N down-sampling modules and used for outputting segmentation results of the fundus image;
wherein M and N are positive integers greater than 1, and M is greater than or equal to N.
2. The U-network of claim 1, wherein the feature map size of the down-sampling module output is the same as the feature map size of the input of the up-sampling module to which the down-sampling module is connected.
3. The U-network according to claim 1 or 2, characterized in that said up-sampling module comprises a convolution unit, a batch normalization activation unit and a splicing unit, wherein:
the splicing unit is connected with the convolution unit;
and the convolution unit is connected with the splicing unit and the batch normalization activation unit, respectively, and is used for performing convolution on the output result of the splicing unit and inputting the convolution result to the batch normalization activation unit.
4. The U-shaped network of claim 3, further comprising a convolution module, wherein:
the convolution module is connected with the feature encoder and the feature decoder, respectively.
5. The U-shaped network of claim 4, wherein:
the 1st up-sampling module in the feature decoder comprises a first splicing unit, and the first splicing unit is used for splicing the feature map output by the convolution module with the feature map output by the down-sampling module corresponding to the 1st up-sampling module;
the 2nd to Nth up-sampling modules in the feature decoder each comprise a second splicing unit, and the second splicing unit is used for splicing the feature map output by the down-sampling module corresponding to the up-sampling module in which the second splicing unit is located with the feature map output by the adjacent up-sampling module.
6. The U-shaped network of claim 1, wherein N has a value of 5.
7. The U-shaped network of claim 1, wherein said feature decoder further comprises an output module for performing a convolution operation on the output of the Nth up-sampling module followed by a sigmoid activation.
8. The U-shaped network according to claim 1, characterized in that, before training said U-shaped network, preset preprocessing is performed on the training data set, said preset preprocessing comprising rotation, horizontal flipping and vertical flipping.
9. The U-shaped network according to claim 8, characterized in that, during training of said U-shaped network, the cross-entropy loss between the probability map output by the U-shaped network and the ground-truth labels of the training data set is calculated and optimized by a back-propagation algorithm.
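As an illustrative aside (not part of the claim language): the spatial bookkeeping implied by claims 1, 2 and 6 can be sketched in Python, assuming each down-sampling module halves the feature-map size and each up-sampling module doubles it, with M = N = 5 and a hypothetical 512×512 input; these assumptions are the author's, not stated dimensions from the patent.

```python
def encoder_shapes(h, w, m):
    """Feature-map sizes after each of M serial down-sampling modules (claim 1)."""
    shapes = []
    for _ in range(m):
        h, w = h // 2, w // 2  # assumed: each module halves the spatial size
        shapes.append((h, w))
    return shapes

def decoder_shapes(h, w, n):
    """Feature-map sizes after each of N serial up-sampling modules."""
    shapes = []
    for _ in range(n):
        h, w = h * 2, w * 2  # assumed: each module doubles the spatial size
        shapes.append((h, w))
    return shapes

enc = encoder_shapes(512, 512, 5)    # M = 5, hypothetical 512x512 fundus image
dec = decoder_shapes(*enc[-1], 5)    # N = 5, starting from the encoder bottom
```

With these assumptions the decoder recovers the input resolution, and the size of each down-sampling output matches the input of its paired up-sampling module, consistent with claim 2.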
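The up-sampling module of claims 3 to 5 chains a splicing unit, a convolution unit and a batch normalization activation unit. A minimal NumPy sketch of the splicing and batch-norm-plus-activation steps follows; the convolution is omitted for brevity, the channel counts are hypothetical, and ReLU is an assumed choice of activation (the claims do not name one).

```python
import numpy as np

def splice(skip_fmap, up_fmap):
    """Splicing unit: concatenate two feature maps along the channel axis."""
    return np.concatenate([skip_fmap, up_fmap], axis=0)  # (C1 + C2, H, W)

def batch_norm_activate(x, eps=1e-5):
    """Batch-normalize per channel, then apply a ReLU-style activation."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return np.maximum((x - mean) / np.sqrt(var + eps), 0.0)

rng = np.random.default_rng(0)
skip = rng.random((32, 64, 64))   # from the paired down-sampling module
up = rng.random((32, 64, 64))     # from the adjacent up-sampling module
spliced = splice(skip, up)        # channels stack: (64, 64, 64)
activated = batch_norm_activate(spliced)
```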
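Claim 7's output module ends with a sigmoid activation, which squashes the final convolution's output into a per-pixel vessel probability in (0, 1). A minimal sketch with hypothetical logit values:

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation of the output module (claim 7)."""
    return 1.0 / (1.0 + np.exp(-x))

logits = np.array([[-2.0, 0.0],
                   [0.0, 3.0]])   # hypothetical output of the final convolution
probs = sigmoid(logits)           # per-pixel vessel probabilities in (0, 1)
```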
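The preset preprocessing of claim 8 (rotation, horizontal flipping, vertical flipping) is standard data augmentation; a minimal NumPy sketch, assuming a single 90-degree rotation (the claim does not fix the rotation angle):

```python
import numpy as np

def augment(img):
    """Claim 8's preset preprocessing: rotation plus horizontal/vertical flips."""
    return [
        np.rot90(img),   # rotation (assumed 90 degrees for illustration)
        np.fliplr(img),  # horizontal flip
        np.flipud(img),  # vertical flip
    ]

img = np.arange(4).reshape(2, 2)   # stand-in for a fundus image
augmented = augment(img)           # three extra training samples per image
```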
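The training objective of claim 9 — cross entropy between the output probability map and the ground-truth labels — can be sketched for a binary vessel mask as follows. This is a hedged reconstruction: the patent does not give the exact formula, and the mean reduction and clipping epsilon are assumptions.

```python
import numpy as np

def binary_cross_entropy(p, y, eps=1e-7):
    """Mean pixel-wise cross entropy between probability map p and labels y."""
    p = np.clip(p, eps, 1.0 - eps)  # guard against log(0)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

# Hypothetical probability map and ground-truth vessel mask.
probs = np.array([0.9, 0.1, 0.8, 0.3])
labels = np.array([1.0, 0.0, 1.0, 0.0])
loss = binary_cross_entropy(probs, labels)
```

In training, the gradient of this loss with respect to the network weights would be propagated backward through the decoder and encoder, as claim 9's back-propagation step describes.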
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911095957.8A CN110889859A (en) | 2019-11-11 | 2019-11-11 | U-shaped network for fundus image blood vessel segmentation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110889859A true CN110889859A (en) | 2020-03-17 |
Family
ID=69747312
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911095957.8A Pending CN110889859A (en) | 2019-11-11 | 2019-11-11 | U-shaped network for fundus image blood vessel segmentation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110889859A (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103517710A (en) * | 2011-02-07 | 2014-01-15 | 普莱希科公司 | Compounds and methods for kinase modulation and indications thereof |
CN109069668A (en) * | 2015-12-14 | 2018-12-21 | 宾夕法尼亚州大学信托人 | Gene therapy for eye disease |
CN109191476A (en) * | 2018-09-10 | 2019-01-11 | 重庆邮电大学 | The automatic segmentation of Biomedical Image based on U-net network structure |
CN109345538A (en) * | 2018-08-30 | 2019-02-15 | 华南理工大学 | A Retinal Vessel Segmentation Method Based on Convolutional Neural Networks |
CN109615358A (en) * | 2018-11-01 | 2019-04-12 | 北京伟景智能科技有限公司 | Automatic restaurant settlement method and system based on deep-learning image recognition |
CN109635862A (en) * | 2018-12-05 | 2019-04-16 | 合肥奥比斯科技有限公司 | Retinopathy of prematurity plus lesion classification method |
CN109740465A (en) * | 2018-12-24 | 2019-05-10 | 南京理工大学 | A Lane Line Detection Algorithm Based on Instance Segmentation Neural Network Framework |
CN109859210A (en) * | 2018-12-25 | 2019-06-07 | 上海联影智能医疗科技有限公司 | Medical data processing apparatus and method |
CN110009095A (en) * | 2019-03-04 | 2019-07-12 | 东南大学 | Road driving area efficient dividing method based on depth characteristic compression convolutional network |
CN110110692A (en) * | 2019-05-17 | 2019-08-09 | 南京大学 | Real-time image semantic segmentation method based on a lightweight fully convolutional neural network |
CN110147794A (en) * | 2019-05-21 | 2019-08-20 | 东北大学 | Real-time outdoor scene segmentation method for unmanned vehicles based on deep learning |
CN110188768A (en) * | 2019-05-09 | 2019-08-30 | 南京邮电大学 | Real-time image semantic segmentation method and system |
CN110188817A (en) * | 2019-05-28 | 2019-08-30 | 厦门大学 | Real-time high-performance street-view image semantic segmentation method based on deep learning |
CN110197493A (en) * | 2019-05-24 | 2019-09-03 | 清华大学深圳研究生院 | Fundus image blood vessel segmentation method |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112669285A (en) * | 2020-12-29 | 2021-04-16 | 中山大学 | Fundus image blood vessel segmentation method based on shared decoder and residual error tower type structure |
CN112669285B (en) * | 2020-12-29 | 2022-03-08 | 中山大学 | Fundus image blood vessel segmentation method based on shared decoder and residual error tower type structure |
CN115082388A (en) * | 2022-06-08 | 2022-09-20 | 哈尔滨理工大学 | Diabetic retinopathy image detection method based on attention mechanism |
CN119444743A (en) * | 2025-01-08 | 2025-02-14 | 四川农业大学 | Animal X-ray medical image data processing method and computer device |
CN119444743B (en) * | 2025-01-08 | 2025-03-21 | 四川农业大学 | Animal X-ray medical image data processing method and computer device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109635862B (en) | Classification method for retinopathy of prematurity plus disease | |
CN111986211A (en) | Deep learning-based ophthalmic ultrasonic automatic screening method and system | |
CN114287878A (en) | Diabetic retinopathy focus image identification method based on attention model | |
CN110097554A (en) | Retinal blood vessel segmentation method based on dense convolution and depthwise separable convolution | |
CN108764342B (en) | A Semantic Segmentation Method for Optic Disc and Optic Cup in Fundus Map | |
CN109919915A (en) | Retina fundus image abnormal region detection method and device based on deep learning | |
CN110889859A (en) | U-shaped network for fundus image blood vessel segmentation | |
CN113888556B (en) | A retinal vascular image segmentation method and system based on differential attention | |
Agarwal et al. | A survey on recent developments in diabetic retinopathy detection through integration of deep learning | |
CN110610480B (en) | MCASPP neural network optic cup and optic disc segmentation model for fundus images based on attention mechanism | |
CN113012163A (en) | Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network | |
CN112580580A (en) | Pathological myopia identification method based on data enhancement and model fusion | |
CN113763292A (en) | A fundus and retinal image segmentation method based on deep convolutional neural network | |
Maher et al. | Automated diagnosis non-proliferative diabetic retinopathy in fundus images using support vector machine | |
CN115049682A (en) | Retina blood vessel segmentation method based on multi-scale dense network | |
Sallam et al. | Diabetic retinopathy grading using ResNet convolutional neural network | |
CN115409764A (en) | Multi-mode fundus blood vessel segmentation method and device based on domain self-adaptation | |
CN118134898A (en) | Global fusion type dual-channel retinal vessel segmentation method | |
Pappu et al. | EANet: Multiscale autoencoder based edge attention network for fluid segmentation from SD‐OCT images | |
Pavani et al. | Robust semantic segmentation of retinal fluids from SD-OCT images using FAM-U-Net | |
Akshita et al. | Diabetic retinopathy classification using deep convolutional neural network | |
KR102438659B1 (en) | The method for classifying diabetic macular edema and the device thereof | |
Hussein et al. | Convolutional Neural Network in Classifying Three Stages of Age-Related Macula Degeneration | |
Nguyen et al. | Cataract detection using hybrid cnn model on retinal fundus images | |
Akhtar et al. | A framework for diabetic retinopathy detection using transfer learning and data fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2020-03-17