
Article

Series-Parallel Generative Adversarial Network Architecture for Translating from Fundus Structure Image to Fluorescence Angiography

1 Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
2 Department of Biomedical Engineering, University of Science and Technology of China, Hefei 230041, China
3 School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
4 Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(20), 10673; https://doi.org/10.3390/app122010673
Submission received: 27 September 2022 / Revised: 12 October 2022 / Accepted: 20 October 2022 / Published: 21 October 2022

Abstract

Although fundus fluorescein angiography (FFA) is a highly effective retinal imaging tool for ophthalmic diagnosis, its requirement of an intravenous injection of potentially harmful fluorescein dye limits its application. To reduce the frequency of intravenous injections, we propose a series-parallel generative adversarial network (GAN) architecture that translates fundus structure images to FFA images; dye injection is needed only to collect the training data. First, fundus structure images and the corresponding FFA images of three phases are collected. Second, our series-parallel GAN is trained to translate fundus structure images to FFA images under the supervision of the real FFA images. Third, the trained series-parallel GAN model translates FFA images from fundus structure images alone. We demonstrate the advancements of our algorithm by comparing the FFA images it translates with those of Sequence GAN, pix2pix, and cycleGAN, and we further confirm them by evaluating the peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, and mean-squared error (MSE) of all four algorithms. Finally, we show some typical FFA images translated by our algorithm to demonstrate the performance of our method.

1. Introduction

Retinal circulation imaging is critical not only for ophthalmic diagnosis but also for studying eye diseases [1,2]. The most popular ocular angiography methods include fundus fluorescein angiography (FFA) with narrow or normal field [3], wide field [4], and ultra-wide field [5,6,7], indocyanine green angiography (ICGA) [8], and optical coherence tomography angiography (OCT-A) [9]. Among these methods, ICGA is primarily used for choroidal blood flow imaging, and OCT-A cannot show the blood-filling process or static leakage, so FFA is the most accurate retinal circulation imaging tool. However, FFA requires an intravenous injection of dye, which may cause adverse effects including nausea, vomiting, syncope, shock, and, in the worst case, death. The elderly, who constitute the majority of the FFA population, are especially likely to suffer these adverse effects because of their weakened constitutions.
One way to alleviate this problem is to reduce the frequency of FFA usage. Since 2018, several studies have therefore aimed to translate fundus structure images into FFA images, using paired data [10,11,12,13,14], unpaired data [15,16,17], paired or unpaired data [18], and one-to-three data [19]. Most of these studies [10,11,12,13,14,15,16,17,18] focus on translating fundus structure images to late-phase FFA images. Although the late-phase FFA image is critical for ophthalmic diagnosis and disease research, whether the filling time of retinal vessels is normal is also an important diagnostic criterion for retinal vascular occlusion diseases [19], such as central retinal artery occlusion [20] and retinal vein occlusion [21].
Translating fundus structure images to more phases of FFA images is therefore essential for providing information about the filling process of retinal vessels. One approach is to use several independent generative adversarial networks (GANs) to translate a fundus structure image, in parallel, into the corresponding phases of FFA images. Although this method is direct and effective, it makes poor use of the mutual information between the different phases of FFA images, resulting in low efficiency. To exploit this mutual information, Li et al. [19] proposed Sequence GAN, which connects three GANs in series to translate a fundus structure image to three phases of FFA images. However, the first translated FFA image still suffers from a low utilization rate of mutual information.
In this paper, we present a series-parallel GAN architecture for translating fundus structure images to fluorescence angiography that improves the utilization rate of mutual information between the different phases of FFA images. The proposed algorithm, called SPFSAT (Series-Parallel Fundus Structure to Angiography Translation), uses a series-parallel generator and a single discriminator. In the generator, three decoders share one encoder, which extracts all the features, including mutual information, from the input fundus structure image in a single pass under the supervision of all three phases of labeled FFA images. Instead of using three parallel discriminators to judge the three FFA images independently [19], the single discriminator judges the three FFA images together, further improving the utilization rate of mutual information between the three phases. To show the advancements of our algorithm, we compare the FFA images translated by our algorithm with those of Sequence GAN [19], pix2pix [22], and cycleGAN [23]: typical translated FFA images are shown, and the peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index [24], and mean-squared error (MSE) are evaluated. Some typical FFA images translated by our algorithm are also presented to demonstrate the performance of our method.

2. Methods

This section explains our SPFSAT algorithm, which is depicted in Figure 1. An encoder extracts the features of the input fundus structure image, and three decoders with different weights generate three different fake FFA images from the extracted features. To discriminate the real FFA images from the fake ones, the fundus structure image and the three-phase FFA images are fed to a single discriminator. Thus, the discriminator judges the FFA images not only by single-image information but also by the mutual information among the fundus structure image and the three-phase FFA images. The loss function, used to optimize the network, is the sum of the L1 loss and the adversarial loss between the real and fake FFA images.
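To make the data flow concrete, the following is a minimal PyTorch sketch of this series-parallel wiring. It is an illustration of the idea, not the authors' implementation; the Encoder and decoder modules are placeholders here, and concrete block-level sketches follow in Section 2.2.

```python
import torch
import torch.nn as nn

class SeriesParallelGenerator(nn.Module):
    """One shared encoder feeding three phase-specific decoders."""
    def __init__(self, encoder: nn.Module, make_decoder):
        super().__init__()
        self.encoder = encoder
        # Three decoders with the same architecture but independent weights.
        self.decoders = nn.ModuleList([make_decoder() for _ in range(3)])

    def forward(self, fundus: torch.Tensor):
        features = self.encoder(fundus)          # shared feature extraction
        # Each decoder translates the shared features into one FFA phase.
        return [dec(features) for dec in self.decoders]

def discriminator_input(fundus, ffa_phases):
    # The single discriminator sees the structure image and all three
    # FFA phases concatenated along the channel dimension.
    return torch.cat([fundus, *ffa_phases], dim=1)
```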

2.1. Dataset Preparation

The datasets used in this study were created from the data of 66 patients (62 with eye diseases and 4 with normal eyes), with an image size of 768 × 768 pixels, acquired using Spectralis HRA equipment (Heidelberg Engineering, Heidelberg, Germany) between August 2011 and June 2019 at the Third People’s Hospital of Changzhou (Jiangsu, China). The fundus structure images were acquired at the 785 nm wavelength with a probing power of 1.2 mW, and the FFA images were acquired at the 488 nm wavelength with a probing power of 1.4 mW. The fields of view of these images were 30°, 45°, and 60°. The image capture procedure involved system parameter selection, head placement, alignment of the imaging system, eye fixation, focusing, and pressing the shutter. The three-phase FFA images were captured in the arterial phase (11 to 15 s), venous phase (16 to 20 s), and late phase (5 to 6 min). Thus, we created one image group per patient, containing the fundus structure image and the corresponding three-phase FFA images. In each group, the three-phase FFA images were manually registered to the fundus structure image through translation and rotation. Figure 2 shows an example of a manually registered image group.
We used 51 image groups for the training set, 2 for the validation set, and the remaining 13 for the test set. Because the deep learning network requires a large dataset, we augmented the registered dataset by random cropping. The training set was generated by randomly cropping 512 × 512 pixel images from the 51 image groups during the training process. The validation set was generated by randomly cropping 50 such images from the 2 image groups, and the test set by randomly cropping 100 such images from the 13 image groups; final testing results with different fields of view were generated by randomly cropping from these images. Because the diseased areas appear at different locations in the FFA images and are not very large, the dataset generated by random cropping is balanced: nearly half of the images contain diseased areas and nearly half do not.
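A minimal sketch of this augmentation follows, assuming each registered image group is a list of four aligned NumPy arrays (the structure image plus three FFA phases). Applying the same crop window to every image in a group, so that pixel correspondence is preserved, is our assumption; the paper does not state this detail.

```python
import numpy as np

def random_crop_group(group, crop=512, rng=None):
    """Crop all images in a registered group with one shared window."""
    rng = rng or np.random.default_rng()
    h, w = group[0].shape[:2]
    top = rng.integers(0, h - crop + 1)    # inclusive range [0, h - crop]
    left = rng.integers(0, w - crop + 1)
    return [img[top:top + crop, left:left + crop] for img in group]
```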

2.2. Deep Convolutional Neural Networks

The network architecture of our encoder is shown in Figure 3. This encoder is similar to the pix2pix encoder [22]. Specifically, the input image is sequentially processed by one convolution block, two downsample blocks, and six Resnet blocks to extract its features.
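A hedged sketch of such an encoder follows. The block sequence (one convolution block, two downsample blocks, six Resnet blocks) comes from the text; the channel counts, kernel sizes, and use of instance normalization are assumptions borrowed from the pix2pix family, not taken from the paper.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, kernel=7, stride=1, pad=3):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, stride, pad),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class ResnetBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, 1, 1), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, 1, 1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)   # residual connection

class Encoder(nn.Module):
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        layers = [conv_block(in_ch, base)]                    # one convolution block
        layers += [conv_block(base, base * 2, 3, 2, 1),       # two downsample blocks
                   conv_block(base * 2, base * 4, 3, 2, 1)]
        layers += [ResnetBlock(base * 4) for _ in range(6)]   # six Resnet blocks
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```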
The network architecture of our decoder, which is in the same style as the decoder of pix2pix [22], is shown in Figure 4. To generate fake FFA images, it applies two upsample blocks, one convolution, and one hyperbolic tangent sequentially to the features extracted by the encoder.
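A matching decoder sketch is given below, with channel widths mirroring the encoder assumption above; only the sequence of two upsample blocks, one convolution, and a hyperbolic tangent is taken from the text.

```python
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, base=64, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            # two upsample blocks
            nn.ConvTranspose2d(base * 4, base * 2, 3, 2, 1, output_padding=1),
            nn.InstanceNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 3, 2, 1, output_padding=1),
            nn.InstanceNorm2d(base), nn.ReLU(True),
            # one convolution followed by a hyperbolic tangent
            nn.Conv2d(base, out_ch, 7, 1, 3),
            nn.Tanh(),
        )

    def forward(self, features):
        return self.net(features)
```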
Figure 5 shows the network architecture of our discriminator. Its distinguishing feature is that it takes as input the concatenated fundus structure image and the corresponding three-phase FFA images. Thus, the discriminator judges the FFA images not only by single-image information but also by the mutual information among the fundus structure image and the three-phase FFA images. This input is then processed by four convolution blocks and one convolution sequentially.
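A PatchGAN-style sketch of this discriminator follows. The four-image concatenated input and the four-convolution-blocks-plus-one-convolution structure follow the text; the channel widths, strides, and LeakyReLU activations are assumptions in the style of pix2pix.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, in_ch=4, base=64):   # structure image + 3 FFA phases (grayscale assumed)
        super().__init__()
        def block(cin, cout, stride=2):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 4, stride, 1),
                nn.InstanceNorm2d(cout),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.net = nn.Sequential(
            block(in_ch, base), block(base, base * 2),          # four convolution blocks
            block(base * 2, base * 4), block(base * 4, base * 8, stride=1),
            nn.Conv2d(base * 8, 1, 4, 1, 1),                    # one final convolution -> patch scores
        )

    def forward(self, fundus, ffa_phases):
        x = torch.cat([fundus, *ffa_phases], dim=1)  # joint input
        return self.net(x)
```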

2.3. Loss Function

To train our networks to output realistic fake FFA images, the loss function is built as the sum of the L1 loss and the adversarial loss between the real and fake FFA images:
$$\mathrm{Loss} = \mathrm{L1\ Loss} + \mathrm{Adversarial\ Loss}.$$

The L1 loss is given by

$$\mathrm{L1\ Loss} = \sum_{\mathrm{Phase}=1}^{3} \sum_{i,j} \big\lVert \mathrm{Fake\ FFA}_{\mathrm{Phase}}(i,j) - \mathrm{Real\ FFA}_{\mathrm{Phase}}(i,j) \big\rVert_{1}.$$

The adversarial loss is given by

$$\mathrm{Adversarial\ Loss} = \big\lVert G_{0} - \mathrm{zeros\_like}(G_{0}) \big\rVert^{2} + \big\lVert G_{1} - \mathrm{ones\_like}(G_{0}) \big\rVert^{2},$$

in which

$$G_{0} = \mathrm{Discriminator}\big(\mathrm{concat}(\mathrm{Input},\ \mathrm{Fake\ FFA}_{1},\ \mathrm{Fake\ FFA}_{2},\ \mathrm{Fake\ FFA}_{3})\big),$$

$$G_{1} = \mathrm{Discriminator}\big(\mathrm{concat}(\mathrm{Input},\ \mathrm{Real\ FFA}_{1},\ \mathrm{Real\ FFA}_{2},\ \mathrm{Real\ FFA}_{3})\big).$$
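The sketch below computes this loss in PyTorch against the module sketches above. As written, the adversarial term is the discriminator-side least-squares objective (fakes pushed toward zeros, reals toward ones); in practice the generator's adversarial term would push its fakes toward ones, a detail the combined formula leaves implicit. The unweighted sum of the two terms follows the equation.

```python
import torch
import torch.nn.functional as F

def spfsat_loss(discriminator, fundus, fakes, reals):
    """fakes/reals: lists of three phase tensors, matching the equations above."""
    # L1 term summed over the three phases
    l1 = sum(F.l1_loss(f, r, reduction="sum") for f, r in zip(fakes, reals))
    # Adversarial term: fake patch scores toward 0, real patch scores toward 1
    g0 = discriminator(fundus, fakes)
    g1 = discriminator(fundus, reals)
    adv = ((g0 - torch.zeros_like(g0)) ** 2).mean() \
        + ((g1 - torch.ones_like(g1)) ** 2).mean()
    return l1 + adv
```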

2.4. Model Training and Testing

Our model was trained using the registered and randomly cropped image groups with a batch size of 1 for 2000 epochs. The learning rate was 0.0002 for the first 1000 epochs and was linearly decreased to 0 over the last 1000 epochs. The trained model was then used to translate fundus structure images to three-phase FFA images, which were randomly cropped to generate the final testing results with different fields of view.
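This schedule maps directly onto a LambdaLR scheduler, as sketched below; the optimizer choice (Adam with the usual GAN betas) is an assumption, since the paper does not name it.

```python
import torch

def make_optimizer_and_scheduler(params, lr=2e-4, flat=1000, decay=1000):
    opt = torch.optim.Adam(params, lr=lr, betas=(0.5, 0.999))
    def lr_lambda(epoch):
        # Constant for the first `flat` epochs, then linear decay to 0.
        if epoch < flat:
            return 1.0
        return max(0.0, 1.0 - (epoch - flat) / decay)
    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched
```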

3. Results

A total of 7.3 h was required for training and 61.2 s for testing. These processes were performed with 64-bit Python on Ubuntu, on a computer with an Intel Xeon E5-2620 processor (2.10 GHz), 32 GB of memory, and an NVIDIA GeForce GTX 1080Ti graphics card.
Figure 6 depicts four typical examples of the improvements achieved by our algorithm. The first and second examples are from a wide field of view, while the third and fourth are from a narrow field of view. Each example contains one input image, three output images obtained via real FFA imaging, and three output images each from our algorithm, Sequence GAN [19], pix2pix [22], and cycleGAN [23]. As shown in Figure 6, the FFA images translated by our algorithm show higher similarity to the real FFA images than those translated by Sequence GAN [19], pix2pix [22], and cycleGAN [23]. In particular, in the third-phase images of the first and second examples, the FFA images translated by our algorithm, denoted by red square boxes, perform significantly better in translating optic nerve head (ONH) leakages. These results qualitatively demonstrate our algorithm’s advancements.
To further validate our algorithm’s advancements, we compare the FFA images translated by our algorithm to those translated by state-of-the-art algorithms [19,22,23] using PSNR, SSIM [24], and MSE. The values calculated over the entire final testing results are listed in Table 1, with the real FFA images taken as the ground truth. The PSNR and SSIM values are not very high owing to fluctuations in the real three-phase FFA images and the limited size of our dataset; in particular, the real three-phase FFA images fluctuate with measurement conditions such as heart rate. Although the limited dataset size affects the PSNR and SSIM values, these values can still be used to validate our algorithm’s advancements. Based on these similarity indices, we identify two facts. First, our algorithm achieves higher mean PSNR and SSIM [24] and lower mean MSE, indicating that the FFA images translated by our algorithm are overall more similar to the real FFA images than those translated by Sequence GAN [19], pix2pix [22], and cycleGAN [23]. Second, the performance of our algorithm across the three phases is more balanced than that of Sequence GAN [19], whose imbalance is caused by its unbalanced mutual information utilization rate. Thus, the advancements of our algorithm are confirmed.
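These metrics can be reproduced with scikit-image's reference implementations, as sketched below. The sketch assumes 8-bit grayscale images, so data_range=255 is an assumption about how the images are stored.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(fake, real):
    """Return (PSNR, SSIM, MSE) for one translated image against ground truth."""
    mse = float(np.mean((fake.astype(np.float64) - real.astype(np.float64)) ** 2))
    psnr = peak_signal_noise_ratio(real, fake, data_range=255)
    ssim = structural_similarity(real, fake, data_range=255)
    return psnr, ssim, mse
```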
To demonstrate the performance of our method, two representative examples of our image translation are presented in Figure 7. The first example is from a wide field of view and the second example is from a narrow field of view. Each example contains one input image, three output images from real FFA imaging, and three output images from our algorithm.
As shown in Figure 7, the FFA images translated by our algorithm show high similarity to the real FFA images. As shown in the third-phase images of the first example, the FFA images translated by our algorithm, denoted by red square boxes, perform well in translating optic nerve head (ONH) leakages. Thus, our method can provide good translated three-phase FFA images from a fundus structure image.

4. Discussion

We presented a series-parallel GAN architecture for translating fundus structure images to fluorescence angiography. The proposed algorithm translates a fundus structure image to three-phase FFA images. Compared with the other algorithms [19,22,23], it achieves better performance owing to its high utilization rate of mutual information. This is achieved in two ways. First, it uses a series-parallel generator that contains one encoder and three decoders. The encoder extracts features, including the mutual information of the three-phase FFA images, from the input fundus structure image under the supervision of all three phases of labeled FFA images; three decoders with different weights then generate the three-phase FFA images, respectively. Second, it uses a single discriminator that takes the fundus structure image and the three-phase FFA images as input and then evaluates the FFA images based not only on single-image information but also on the mutual information among the fundus structure image and the three-phase FFA images.
Here, we compare our algorithm with the other algorithms [19,22,23] in more detail. Compared with the three independent parallel GAN architectures [22,23], our algorithm has three advantages. First, it has lower computation costs for both training and testing, owing to the shared encoder and the single discriminator. Second, it requires a smaller training set, because each translated FFA image is supervised by the three-phase real FFA images simultaneously. Third, instead of training each phase of the FFA image independently, our model is trained with a loss function that takes into account the fundus structure image and the three-phase FFA images, which makes the training process more robust. Compared with Sequence GAN [19], our algorithm also has a lower computation cost, again owing to the shared encoder and single discriminator. In addition, our series-parallel GAN architecture improves the translation quality of the first-phase FFA image by increasing its utilization rate of mutual information.
Because the translation from fundus structure images to multiple-phase FFA images is not yet available in commercial devices, there is still room for improvement. Here, we suggest two directions for future research. One is finding a new GAN architecture to further improve the utilization rate of mutual information between the different phases of FFA images. The other is translating fundus structure images to more than three phases of FFA images, which requires capturing more phases of FFA images and designing algorithms tailored to such datasets.

5. Conclusions

An automated algorithm for translating fundus structure images to three-phase FFA images is proposed. The algorithm comprises a series-parallel generator and a single discriminator, achieving a high utilization rate of mutual information between the different phases of FFA images. Its advancements over other state-of-the-art algorithms [19,22,23] are confirmed via qualitative and quantitative comparisons. Based on these comparisons, we conclude that our method shows superior performance in translating fundus structure images to three-phase FFA images, achieving higher similarity to real FFA images and better presentation of ONH leakages. As a screening diagnostic method, it may be useful in standard ophthalmic examinations for providing information about the filling process of retinal vessels and detecting ONH leakages.

Author Contributions

Conceptualization, Y.C.; methodology, Y.C. and Y.H.; software, Y.C.; validation, Y.C.; formal analysis, Y.C.; investigation, Y.C.; resources, Y.H. and G.S.; data curation, Y.H.; writing—original draft preparation, Y.C.; writing—review and editing, W.L., J.W., P.L., X.Z. and Y.H.; visualization, Y.C.; supervision, Y.C.; project administration, L.X. and X.Z.; funding acquisition, G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Gusu Innovation and Entrepreneurship Leading Talents in Suzhou City, grant number ZXL2021425; Jiangsu Province Key R&D Program, grant number BE2019682; Natural Science Foundation of Jiangsu Province, grant number BK20200214; National Key R&D Program of China, grant number 2017YFB0403701; National Natural Science Foundation of China, grant numbers 61605210, 61675226, and 62075235; Youth Innovation Promotion Association of Chinese Academy of Sciences, grant number 2019320; Frontier Science Research Project of the Chinese Academy of Sciences, grant number QYZDB-SSW-JSC03; and Strategic Priority Research Program of the Chinese Academy of Sciences, grant number XDB02060000.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are not publicly available due to them containing information that could compromise research participant privacy.

Acknowledgments

The authors wish to thank the anonymous reviewers for their valuable suggestions, and Zhoupeng Liao (Baoji People’s Hospital Baoji Emergency Medical Center, Baoji Ophthalmic Hospital, The Fifth Clinical Medical School of Medical College of Yan’an University) and Wen Kong (University of Science and Technology of China, Chinese Academy of Sciences) for their valuable suggestions in writing responses to the reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hayreh, S.S. Ocular vascular occlusive disorders: Natural history of visual outcome. Prog. Retin. Eye Res. 2014, 41, 1–25. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Flaxel, C.J.; Adelman, R.A.; Bailey, S.T.; Fawzi, A.; Lim, J.I.; Vemulakonda, G.A.; Ying, G.S. Retinal and ophthalmic artery occlusions preferred practice pattern®. Ophthalmology 2020, 127, 259–287. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Spaide, R.F.; Klancnik, J.M.; Cooney, M.J. Retinal vascular layers imaged by fluorescein angiography and optical coherence tomography angiography. JAMA Ophthalmol. 2015, 133, 45–50. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Spaide, R.F. Peripheral areas of nonperfusion in treated central retinal vein occlusion as imaged by wide-field fluorescein angiography. Retina 2011, 31, 829–837. [Google Scholar] [CrossRef] [PubMed]
  5. Temkar, S.; Azad, S.V.; Chawla, R.; Damodaran, S.; Garg, G.; Regani, H.; Nawazish, S.; Raj, N.; Venkatraman, V. Ultra-widefield fundus fluorescein angiography in pediatric retinal vascular diseases. Indian J. Ophthalmol. 2019, 67, 788. [Google Scholar] [CrossRef] [PubMed]
  6. Wessel, M.M.; Nair, N.; Aaker, G.; Ehrlich, J.; Cho, M.; D’Amico, D.J.; Kiss, S. Peripheral retinal ischemia, as evaluated by ultra-widefield fluorescein angiography, is associated with macular edema in patients with diabetic retinopathy. Investig. Ophthalmol. Vis. Sci. 2011, 52, 580. [Google Scholar]
  7. Manivannan, A.; Plskova, J.; Farrow, A.; Mckay, S.; Sharp, P.F.; Forrester, J.V. Ultra-wide-field fluorescein angiography of the ocular fundus. Am. J. Ophthalmol. 2005, 140, 525–527. [Google Scholar] [CrossRef] [PubMed]
  8. Owens, S.L. Indocyanine green angiography. Br. J. Ophthalmol. 1996, 80, 263. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Chen, Y.; Hong, Y.J.; Makita, S.; Yasuno, Y. Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning. Biomed. Opt. Express 2018, 9, 1111–1129. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Kamran, S.A.; Hossain, K.F.; Tavakkoli, A.; Zuckerbrod, S.; Baker, S.A.; Sanders, K.M. Fundus2Angio: A conditional GAN architecture for generating fluorescein angiography images from retinal fundus photography. In Proceedings of the International Symposium on Visual Computing, San Diego, CA, USA, 5–7 November 2020; Springer: Cham, Switzerland, 2020; pp. 125–138. [Google Scholar]
  11. Kamran, S.A.; Hossain, K.F.; Tavakkoli, A.; Zuckerbrod, S.L. Attention2angiogan: Synthesizing fluorescein angiography from retinal fundus images using generative adversarial networks. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 9122–9129. [Google Scholar]
  12. Hervella, Á.S.; Rouco, J.; Novo, J.; Ortega, M. Retinal image understanding emerges from self-supervised multimodal reconstruction. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; Springer: Cham, Switzerland, 2018; pp. 321–328. [Google Scholar]
  13. Li, W.; Kong, W.; Chen, Y.; Wang, J.; He, Y.; Shi, G.; Deng, G. Generating fundus fluorescence angiography images from structure fundus images using generative adversarial networks. arXiv 2020, arXiv:2006.10216. [Google Scholar]
  14. Tavakkoli, A.; Kamran, S.A.; Hossain, K.F.; Zuckerbrod, S.L. A novel deep learning conditional generative adversarial network for producing angiography images from retinal fundus photographs. Sci. Rep. 2020, 10, 21580. [Google Scholar] [CrossRef] [PubMed]
  15. Schiffers, F.; Yu, Z.; Arguin, S.; Maier, A.; Ren, Q. Synthetic fundus fluorescein angiography using deep neural networks. In Bildverarbeitung für die Medizin 2018; Springer Vieweg: Berlin/Heidelberg, Germany, 2018; pp. 234–238. [Google Scholar]
  16. Li, K.; Yu, L.; Wang, S.; Heng, P.A. Unsupervised retina image synthesis via disentangled representation learning. In Proceedings of the International Workshop on Simulation and Synthesis in Medical Imaging, Shenzhen, China, 13 October 2019; Springer: Cham, Switzerland, 2019; pp. 32–41. [Google Scholar]
  17. Cai, Z.; Xin, J.; Wu, J.; Liu, S.; Zuo, W.; Zheng, N. Triple multi-scale adversarial learning with self-attention and quality loss for unpaired fundus fluorescein angiography synthesis. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1592–1595. [Google Scholar]
  18. Hervella, Á.S.; Rouco, J.; Novo, J.; Ortega, M. Deep multimodal reconstruction of retinal images using paired or unpaired data. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
  19. Li, W.; He, Y.; Kong, W.; Wang, J.; Deng, G.; Chen, Y.; Shi, G. SequenceGAN: Generating fundus fluorescence angiography sequences from structure fundus image. In Proceedings of the International Workshop on Simulation and Synthesis in Medical Imaging, Strasbourg, France, 27 September 2021; Springer: Cham, Switzerland, 2021; pp. 110–120. [Google Scholar]
  20. Varma, D.D.; Cugati, S.; Lee, A.W.; Chen, C.S. A review of central retinal artery occlusion: Clinical presentation and management. Eye 2013, 27, 688–697. [Google Scholar] [CrossRef] [PubMed]
  21. Wong, T.Y.; Scott, I.U. Retinal-vein occlusion. N. Engl. J. Med. 2010, 363, 2135–2144. [Google Scholar] [CrossRef] [PubMed]
  22. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
  23. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
  24. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Diagram of our SPFSAT algorithm.
Figure 2. Example of a manually registered image group containing the fundus structure image and the corresponding three-phase FFA images.
Figure 3. Network architecture of our encoder.
Figure 4. Network architecture of our decoder.
Figure 5. Network architecture of our discriminator.
Figure 6. Qualitative comparison with state-of-the-art methods using four examples. The first and second examples are from a wide field of view, and the third and fourth are from a narrow field of view. The images with red square boxes show our translation of ONH leakages, which is markedly better than that of the other algorithms.
Figure 7. Two representative examples of our image translation. The first example is from a wide field of view and the second is from a narrow field of view. The images with red square boxes show that our algorithm can successfully translate ONH leakages.
Table 1. Comparison of the similarity metrics for translated FFA: overall PSNR, SSIM [24], and MSE.

Algorithm        Phase      PSNR ↑     SSIM ↑     MSE ↓
Our algorithm    Phase 1    16.5551    0.4361     2206.174
                 Phase 2    15.8551    0.4536     2221.589
                 Phase 3    16.2978    0.5338     2003.709
                 Mean       16.2360    0.4745     2143.824
Sequence GAN     Phase 1    11.0153    0.3299     5809.015
                 Phase 2    12.8195    0.4081     3718.937
                 Phase 3    13.4115    0.5466     3635.466
                 Mean       12.4154    0.4282     4387.806
pix2pix          Phase 1    13.2272    0.3162     4163.630
                 Phase 2    14.0097    0.3882     2990.015
                 Phase 3    14.5526    0.5313     3171.523
                 Mean       13.9298    0.4119     3441.722
cycleGAN         Phase 1    10.0193    0.1691     7489.193
                 Phase 2    10.6604    0.2057     6165.381
                 Phase 3    11.7130    0.3957     5422.204
                 Mean       10.7975    0.2568     6358.926
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

