A Multi-Scale Residual Attention Network for Retinal Vessel Segmentation
Figure 1. The MRA-UNet architecture.
Figure 2. The detailed structure of the residual attention module.
Figure 3. The detailed structure of the bottom reconstruction module.
Figure 4. The detailed structure of the spatial activation module.
Figure 5. Receiver operating characteristic (ROC) curve and precision recall (PR) curve for the five models on the DRIVE dataset.
Figure 6. Receiver operating characteristic (ROC) curve and precision recall (PR) curve for the five models on the CHASE dataset.
Figure 7. Comparison of segmentation results on the DRIVE database. (a) Image; (b) ground truth; (c) Aslani [16]; (d) ours.
Figure 8. Comparison of segmentation results on the CHASE database. (a) Image; (b) ground truth; (c) R2U-Net; (d) ours.
Abstract
1. Introduction
- We propose a multi-scale residual attention network (MRA-UNet) to segment retinal vessels automatically. Multi-scale inputs are fed to the network, and a residual attention module in the down-sampling path strengthens the feature extraction ability of the network. This improves the robustness of the model and reduces the loss of micro-vascular feature information.
- In MRA-UNet, we propose a bottom reconstruction module, which combines and aggregates the outputs of the residual attention modules along the down-sampling path to further enrich contextual semantic information. It eases the information loss incurred during the model's down-sampling process.
- A spatial activation module is added to the output of the up-sampling path. While restoring the image, it further activates the small blood vessels in the fundus image and effectively highlights vessel ends and the boundaries of thin vessels.
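The residual attention module above chains channel attention and pixel attention behind an identity skip (Sections 2.1.1 and 2.1.2). As a rough illustration of the mechanism only (not the paper's exact implementation, which uses learned convolutions), here is a NumPy sketch in which the weight arrays `wc` and `wp` stand in for hypothetical 1x1-convolution kernels:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, wc):
    # x: (C, H, W) feature map; wc: (C, C) hypothetical 1x1-conv weights
    pooled = x.mean(axis=(1, 2))           # global average pooling -> (C,)
    weights = sigmoid(wc @ pooled)         # per-channel attention weights in (0, 1)
    return x * weights[:, None, None]      # rescale each channel

def pixel_attention(x, wp):
    # wp: (C,) hypothetical weights collapsing channels to one attention map
    attn = sigmoid(np.tensordot(wp, x, axes=1))  # (H, W) spatial weights
    return x * attn[None, :, :]            # rescale each pixel

def residual_attention(x, wc, wp):
    # channel attention, then pixel attention, plus an identity skip connection
    return x + pixel_attention(channel_attention(x, wc), wp)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
y = residual_attention(x, rng.standard_normal((4, 4)), rng.standard_normal(4))
print(y.shape)  # (4, 8, 8)
```

The identity skip means the module can fall back to passing features through unchanged, which is the usual motivation for residual attention designs.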
2. Methodology
2.1. Residual Attention Module
2.1.1. Channel Attention Module
2.1.2. Pixel Attention Module
2.2. Bottom Reconstruction Module
2.3. Spatial Activation Module
3. Datasets and Evaluation
3.1. Datasets
3.2. Experimental Environment and Parameter Settings
3.3. Performance Evaluation Indicator
4. Experiment Results and Analysis
4.1. Comparison of Results before and after Model Improvement
4.2. Model Parameter Quantity and Computation Time Analysis
4.3. Evaluation of ROC and Precision Recall (PR) Curves before and after Model Improvement
4.4. Visualization Results with Different Methods
4.5. Comparison of Segmentation Results with Different Methods
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Fraz, M.M.; Remagnino, P.; Hoppe, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C.G.; Barman, S.A. Blood vessel segmentation methodologies in retinal images—A survey. Comput. Methods Programs Biomed. 2012, 108, 407–433.
2. Abràmoff, M.D.; Folk, J.C.; Han, D.P.; Walker, J.D.; Williams, D.F.; Russell, S.R.; Massin, P.; Cochener, B.; Gain, P.; Tang, L.; et al. Automated analysis of retinal images for detection of referable diabetic retinopathy. JAMA Ophthalmol. 2013, 131, 351–357.
3. Fraz, M.M.; Barman, S.A.; Remagnino, P.; Hoppe, A.; Basit, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C.G. An approach to localize the retinal blood vessels using bit planes and centerline detection. Comput. Methods Programs Biomed. 2012, 108, 600–616.
4. Azzopardi, G.; Petkov, N. Trainable COSFIRE filters for keypoint detection and pattern recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 490–503.
5. Fathi, A.; Naghsh-Nilchi, A.R. Automatic wavelet-based retinal blood vessels segmentation and vessel diameter estimation. Biomed. Signal Process. Control 2013, 8, 71–80.
6. Nguyen, U.T.V.; Bhuiyan, A.; Park, L.A.F.; Ramamohanarao, K. An effective retinal blood vessel segmentation method using multi-scale line detection. Pattern Recognit. 2013, 46, 703–715.
7. Yin, X.X.; Ng, B.W.H.; He, J.; Zhang, Y.; Abbott, D. Unsupervised segmentation of blood vessels from colour retinal fundus images. In Proceedings of the International Conference on Health Information Science, HIS 2014, Shenzhen, China, 22–23 April 2014; pp. 194–203.
8. Hou, Y. Automatic segmentation of retinal blood vessels based on improved multiscale line detection. J. Comput. Sci. Eng. 2014, 8, 119–128.
9. Tapamo, J.R.; Viriri, S.; Gwetu, M.V. Segmentation of retinal blood vessels using normalized Gabor filters and automatic thresholding. S. Afr. Comput. J. 2014, 55, 12–24.
10. Hassan, G.; El-Bendary, N.; Hassanien, A.E.; Fahmy, A.; Shoeb, A.M.; Snasel, V. Retinal blood vessel segmentation approach based on mathematical morphology. Procedia Comput. Sci. 2015, 65, 612–622.
11. Karunanayake, N.; Kodikara, N.D. An improved method for automatic retinal blood vessel vascular segmentation using gabor filter. Open J. Med. Imaging 2015, 5, 204.
12. Singh, N.P.; Srivastava, R. Retinal blood vessels segmentation by using Gumbel probability distribution function based matched filter. Comput. Methods Programs Biomed. 2016, 129, 40–50.
13. Orlando, J.I.; Blaschko, M. Learning fully-connected CRFs for blood vessel segmentation in retinal images. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Boston, MA, USA, 14–18 September 2014; pp. 634–641.
14. Tang, S.; Lin, T.; Yang, J.; Fan, J.; Ai, D.; Wang, Y. Retinal vessel segmentation using supervised classification based on multi-scale vessel filtering and Gabor wavelet. J. Med. Imaging Health Inform. 2015, 5, 1571–1574.
15. Zhu, C.; Zou, B.; Xiang, Y.; Cui, J.; Wu, H. An ensemble retinal vessel segmentation based on supervised learning in fundus images. Chin. J. Electron. 2016, 25, 503–511.
16. Aslani, S.; Sarnel, H. A new supervised retinal vessel segmentation method based on robust hybrid features. Biomed. Signal Process. Control 2016, 30, 1–12.
17. Mo, J.; Zhang, L. Multi-level deep supervised networks for retinal vessel segmentation. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 2181–2193.
18. Liskowski, P.; Krawiec, K. Segmenting retinal blood vessels with deep neural networks. IEEE Trans. Med. Imaging 2016, 35, 2369–2380.
19. Sangeethaa, S.N.; Maheswari, P.U. An intelligent model for blood vessel segmentation in diagnosing DR using CNN. J. Med. Syst. 2018, 42, 175.
20. Prentašić, P.; Heisler, M.; Mammo, Z.; Lee, S.; Merkur, A.; Navajas, E.; Beg, M.F.; Šarunic, M.; Lončarić, S. Segmentation of the foveal microvasculature using deep learning networks. J. Biomed. Opt. 2016, 21, 075008.
21. Tan, J.H.; Acharya, U.R.; Bhandary, S.V.; Chua, K.C.; Sivaprasad, S. Segmentation of optic disc, fovea and retinal vasculature using a single convolutional neural network. J. Comput. Sci. 2017, 20, 70–79.
22. Jiang, Z.; Zhang, H.; Wang, Y.; Ko, S.-B. Retinal blood vessel segmentation using fully convolutional network with transfer learning. Comput. Med. Imaging Graph. 2018, 68, 1–15.
23. Samuel, P.M.; Veeramalai, T. Multilevel and multiscale deep neural network for retinal blood vessel segmentation. Symmetry 2019, 11, 946.
24. Soomro, T.A.; Afifi, A.J.; Gao, J.; Hellwich, O.; Zheng, L.; Paul, M. Strided fully convolutional neural network for boosting the sensitivity of retinal blood vessels segmentation. Expert Syst. Appl. 2019, 134, 36–52.
25. Wu, C.; Zou, Y.; Zhan, J. DA-U-Net: Densely connected convolutional networks and decoder with attention gate for retinal vessel segmentation. Mater. Sci. Eng. 2019, 533, 012053.
26. Zhang, S.; Fu, H.; Yan, Y.; Zhang, Y.; Wu, Q.; Yang, M.; Tan, M.; Xu, Y. Attention guided network for retinal image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 797–805.
27. Atli, İ.; Gedik, O.S. Sine-Net: A fully convolutional deep learning architecture for retinal blood vessel segmentation. Eng. Sci. Technol. Int. J. 2020.
28. Pławiak, P.; Abdar, M.; Acharya, U.R. Application of new deep genetic cascade ensemble of SVM classifiers to predict the Australian credit scoring. Appl. Soft Comput. 2019, 84, 105740.
29. Pławiak, P.; Abdar, M.; Pławiak, J.; Makarenkov, V.; Acharya, U.R. DGHNL: A new deep genetic hierarchical network of learners for prediction of credit scoring. Inf. Sci. 2020, 516, 401–418.
30. Hammad, M.; Pławiak, P.; Wang, K.; Acharya, U.R. ResNet-Attention model for human authentication using ECG signals. Expert Syst. 2020, e12547.
31. Tuncer, T.; Ertam, F.; Dogan, S.; Aydemir, E.; Pławiak, P. Ensemble residual network-based gender and activity recognition method with signals. J. Supercomput. 2020, 76, 2119–2138.
32. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
33. Qin, X.; Wang, Z.; Bai, Y.; Xie, X.; Jia, H. FFA-Net: Feature Fusion Attention Network for Single Image Dehazing. arXiv 2019.
34. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
35. Zhuang, J. LadderNet: Multi-path networks based on U-Net for medical image segmentation. arXiv 2018, arXiv:1810.07810.
36. Jiang, Y.; Zhang, H.; Tan, N.; Chen, L. Automatic retinal blood vessel segmentation based on fully convolutional neural networks. Symmetry 2019, 11, 1112.
37. He, K.; Sun, J. Convolutional neural networks at constrained time cost. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5353–5360.
38. Alom, M.Z.; Hasan, M.; Yakopcic, C.; Taha, T.M.; Asari, V.K. Recurrent residual convolutional neural network based on U-Net (R2U-Net) for medical image segmentation. arXiv 2018, arXiv:1802.06955.
39. Lv, Y.; Ma, H.; Li, J.; Liu, S. Attention guided U-Net with atrous convolution for accurate retinal vessels segmentation. IEEE Access 2020, 8, 32826–32839.
Model | Accuracy | Sensitivity | Specificity | F-Measure | AUC |
---|---|---|---|---|---|
MUNet | 0.9678/0.0028 | 0.7913/0.0546 | 0.9849/0.0041 | 0.8106/0.0183 | 0.9654/0.0050 |
MUNet+RA | 0.9700/0.0036 | 0.7789/0.0536 | 0.9885/0.0031 | 0.8188/0.0156 | 0.9790/0.0057 |
MUNet+SA | 0.9701/0.0038 | 0.7730/0.0544 | - | 0.8183/0.0120 | 0.9797/0.0066 |
MUNet+RA+BR | - | 0.7946/0.0527 | 0.9875/0.0032 | 0.8201/0.0119 | 0.9869/0.0033 |
MUNet+RA+BR+SA | 0.9698/0.0029 | - | - | - | 0.9828/0.0043 |
Model | Accuracy | Sensitivity | Specificity | F-Measure | AUC |
---|---|---|---|---|---|
MUNet | 0.9629/0.0028 | 0.7965/0.0519 | 0.9850/0.0023 | 0.7869/0.0210 | 0.9810/0.0042 |
MUNet+RA | 0.9735/0.0031 | 0.8266/0.0475 | 0.9836/0.0029 | 0.7960/0.0100 | 0.9813/0.0037 |
MUNet+SA | 0.9755/0.0041 | 0.8214/0.0298 | - | 0.8095/0.0185 | 0.9849/0.0035 |
MUNet+RA+BR | 0.9756/0.0035 | 0.8255/0.0368 | 0.9859/0.0020 | 0.8102/0.0178 | 0.9897/0.0030 |
MUNet+RA+BR+SA | - | - | 0.9854/0.0022 | - | - |
Accuracy | MUNet:MUNet+RA | MUNet:MUNet+SA | MUNet:MUNet+RA+BR | MUNet:MUNet+RA+BR+SA |
---|---|---|---|---|
p value | 0.037 | 0.036 | 0.012 | 0.033 |
Accuracy | MUNet:MUNet+RA | MUNet:MUNet+SA | MUNet:MUNet+RA+BR | MUNet:MUNet+RA+BR+SA |
---|---|---|---|---|
p value | <0.001 | <0.001 | <0.001 | <0.001 |
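The p values above compare per-image accuracies of the baseline MUNet against each improved variant; the tables do not state which paired test produced them. As a self-contained illustration of how such a paired comparison can be made, here is a stdlib-only sign-flip permutation test on hypothetical per-image accuracies (the numbers below are invented for the example, not taken from the paper):

```python
import random

def paired_permutation_pvalue(a, b, n_perm=10000, seed=0):
    """Two-sided sign-flip permutation test for paired samples a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    observed = abs(sum(diffs) / len(diffs))   # observed mean paired difference
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        # under H0 the sign of each paired difference is exchangeable
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)          # add-one smoothed p value

# hypothetical per-image accuracies for a baseline and an improved variant
base = [0.9650, 0.9671, 0.9683, 0.9660, 0.9642, 0.9690, 0.9668, 0.9655]
impr = [0.9694, 0.9710, 0.9701, 0.9705, 0.9688, 0.9721, 0.9699, 0.9702]
p = paired_permutation_pvalue(base, impr)
print(p < 0.05)  # True: the improvement is consistent across all images
```

A paired t-test would be the more common choice for tables like these; the permutation test is shown because it needs no distributional assumptions or external packages.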
Type | Methods | Year | Accuracy | Sensitivity | Specificity | F-Measure | AUC | Time |
---|---|---|---|---|---|---|---|---|
Unsupervised methods | Fathi [5] | 2013 | 0.9516 | 0.7768 | 0.9759 | - | 0.9516 | 60 s |
| Karunanayake [11] | 2015 | 0.9490 | 0.8163 | 0.9704 | - | - | - |
| Singh [12] | 2016 | 0.9522 | - | - | - | - | - |
Supervised methods | Cheng [33] | 2014 | 0.9474 | 0.7252 | 0.9798 | - | 0.9648 | <60 s |
| Aslani [16] | 2016 | 0.9513 | 0.7545 | 0.9801 | - | 0.9682 | 60 s |
| Mo [17] | 2017 | 0.9521 | 0.7779 | 0.9780 | - | 0.9782 | 0.40 s |
| U-Net [34] | 2018 | 0.9531 | 0.7537 | 0.9820 | 0.8142 | 0.9755 | 4.00 s |
| Residual U-Net [38] | 2018 | 0.9553 | 0.7726 | 0.9820 | 0.8149 | 0.9779 | 5.00 s |
| Samuel [23] | 2019 | 0.9609 | 0.8282 | 0.9738 | - | 0.9786 | - |
| Zhang [26] | 2019 | 0.9692 | 0.8100 | - | - | 0.9856 | - |
| AG-UNet [39] | 2020 | 0.9558 | 0.7854 | 0.9810 | 0.8216 | 0.9682 | 6.00 s |
| Ours | 2020 | - | - | - | - | 0.9828 | 0.86 s |
Type | Methods | Year | Accuracy | Sensitivity | Specificity | F-Measure | AUC | Time |
---|---|---|---|---|---|---|---|---|
Unsupervised methods | Azzopardi [4] | 2015 | 0.9563 | 0.7716 | 0.9701 | - | 0.9497 | - |
Supervised methods | Jiang [22] | 2018 | 0.9668 | - | 0.9745 | - | 0.9810 | - |
| U-Net [38] | 2018 | 0.9578 | 0.8288 | 0.9701 | 0.7783 | 0.9772 | 8.10 s |
| Recurrent U-Net [38] | 2018 | 0.9622 | 0.7459 | 0.9836 | 0.7810 | 0.9803 | 7.50 s |
| R2U-Net [38] | 2018 | 0.9634 | 0.7756 | 0.9820 | 0.7928 | 0.9815 | 2.84 s |
| Zhang [26] | 2019 | 0.9743 | 0.8186 | 0.9848 | - | 0.9863 | - |
| Ours | 2020 | - | 0.8324 | - | - | - | 0.96 s |
Type | Methods | Year | Accuracy | Sensitivity | Specificity | F-Measure | AUC | Time |
---|---|---|---|---|---|---|---|---|
Unsupervised methods | Azzopardi [4] | 2015 | 0.9563 | 0.7716 | 0.9701 | - | 0.9497 | 11.00 s |
| Fathi [5] | 2013 | 0.9591 | 0.8061 | 0.9717 | - | 0.9680 | - |
| Singh [12] | 2016 | 0.9570 | - | - | - | - | - |
Supervised methods | Aslani [16] | 2016 | 0.9605 | 0.7556 | 0.9837 | - | 0.9789 | 60.00 s |
| U-Net [38] | 2018 | 0.9690 | 0.8270 | 0.9842 | 0.8373 | 0.9898 | 7.80 s |
| Residual U-Net [38] | 2018 | 0.9700 | 0.8203 | 0.9856 | 0.8388 | 0.9904 | 8.66 s |
| Recurrent U-Net [38] | 2018 | 0.9706 | 0.8108 | 0.9871 | 0.8396 | 0.9909 | - |
| Jiang [22] | 2018 | 0.9734 | 0.8352 | 0.9846 | - | 0.9900 | - |
| Samuel [23] | 2019 | 0.9646 | - | 0.9701 | - | 0.9892 | - |
| Soomro [24] | 2019 | 0.9680 | 0.8480 | 0.9860 | - | 0.9880 | - |
| Atli [27] | 2020 | 0.9682 | 0.6574 | - | - | 0.9748 | 0.35 s |
| Ours | 2020 | - | 0.8422 | - | - | 0.9873 | 1.18 s |
Image | Accuracy | Sensitivity | Specificity | F-Measure | AUC |
---|---|---|---|---|---|
0 | 0.9707 | 0.8035 | 0.9852 | 0.8140 | 0.9880 |
1 | 0.9762 | 0.8063 | 0.9883 | 0.8183 | 0.9877 |
2 | 0.9813 | 0.8400 | 0.9902 | 0.8432 | 0.9932 |
3 | 0.9679 | 0.6573 | 0.9927 | 0.7523 | 0.9881 |
4 | 0.9667 | 0.8047 | 0.9828 | 0.8134 | 0.9828 |
5 | 0.9772 | 0.8912 | 0.9836 | 0.8446 | 0.9935 |
6 | 0.9759 | - | 0.9795 | 0.9616 | 0.9946 |
7 | 0.9796 | 0.8896 | 0.9869 | 0.8674 | 0.9945 |
8 | 0.9818 | 0.9028 | 0.9885 | - | - |
9 | 0.9751 | 0.8766 | 0.9838 | 0.8501 | 0.9923 |
10 | 0.9791 | 0.8976 | 0.9853 | 0.8594 | 0.9948 |
11 | 0.9811 | 0.9031 | 0.9876 | 0.8808 | 0.9945 |
12 | 0.9786 | 0.8763 | 0.9886 | 0.8795 | 0.9929 |
13 | 0.9784 | 0.8913 | 0.9870 | 0.8821 | 0.9942 |
14 | 0.9784 | 0.8847 | 0.9873 | 0.8766 | 0.9944 |
15 | 0.9663 | 0.7564 | 0.9902 | 0.8211 | 0.9890 |
16 | 0.9728 | 0.7869 | 0.9911 | 0.8383 | 0.9920 |
17 | - | 0.8257 | - | 0.8583 | 0.9950 |
18 | 0.9848 | 0.8007 | 0.9931 | 0.8196 | 0.9930 |
19 | 0.9688 | 0.8148 | 0.9799 | 0.7774 | 0.9850 |
Average | - | - | - | - | - |
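The per-image numbers above follow the standard pixel-level definitions of Accuracy, Sensitivity, Specificity, and F-Measure over a binary vessel mask (the paper's Section 3.3). A minimal stdlib sketch on tiny invented masks, included only to make the column definitions concrete:

```python
def segmentation_metrics(pred, truth):
    """Pixel-level metrics for binary vessel masks given as flat 0/1 sequences."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))  # vessel hit
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))  # background hit
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))  # false vessel
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))  # missed vessel
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # recall on vessel pixels
    specificity = tn / (tn + fp)          # recall on background pixels
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f_measure

# tiny hypothetical masks: 1 = vessel pixel, 0 = background pixel
truth = [1, 1, 1, 0, 0, 0, 0, 0]
pred  = [1, 1, 0, 1, 0, 0, 0, 0]
acc, sen, spe, f1 = segmentation_metrics(pred, truth)
print(round(acc, 3), round(sen, 3), round(spe, 3), round(f1, 3))
# 0.75 0.667 0.8 0.667
```

Because background pixels dominate fundus images, Specificity is usually near 1 while Sensitivity and F-Measure are the discriminating columns, which matches the pattern in the table.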
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Jiang, Y.; Yao, H.; Wu, C.; Liu, W. A Multi-Scale Residual Attention Network for Retinal Vessel Segmentation. Symmetry 2021, 13, 24. https://doi.org/10.3390/sym13010024