Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition
Figure 1. Block diagram of the proposed system.
Figure 2. Addition of FGSM attacks. The first row shows the original images, while the second row shows the FGSM-attacked images that mislead the model.
Figure 3. Addition of SN attacks. The first row shows the original images, while the second row shows the SN-attacked images.
Figure 4. Addition of DF attacks. The first row shows the original images, while the second row shows the DF-attacked images.
Figure 5. Illustration of the adversarial training process.
Figure 6. ROC curve (left) and fusion scatter plot (right).
Abstract
1. Introduction
- We evaluate and analyze adversarial attacks and defenses on retinal fundus images, which is a state-of-the-art endeavor.
- We propose a framework comprising a new SN attack, a defensive model against adversarial attacks based on adversarial training (AT), and a feature fusion strategy that preserves correct DR classification labels.
- We achieve accurate detection of DR from retinal fundus images using the proposed feature fusion approach.
2. Related Work
Defenses against Adversarial Attacks
3. Methodology
3.1. Data Augmentation and Preprocessing
3.2. Transfer Learning
3.3. Perturbed/Adversarial Image Generation
3.3.1. Fast Gradient Sign Method (FGSM) Attack Image Generation
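FGSM perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇x J(θ, x, y)). The following is a minimal PyTorch sketch of this idea, not the authors' original implementation; the model, loss, and ε value are placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Generate an FGSM adversarial example: x_adv = x + eps * sign(dJ/dx)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # classification loss J(theta, x, y)
    model.zero_grad()
    loss.backward()                               # gradient of the loss w.r.t. the input
    perturbed = image + epsilon * image.grad.sign()
    return torch.clamp(perturbed, 0.0, 1.0).detach()  # keep pixels in the valid range
```

Increasing ε makes the perturbation more visible but also more likely to flip the predicted DR class.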
3.3.2. Speckle Noise (SN) Attack Image Generation
- In a speckle-attacked image, each pixel of the original image is combined with a multiplicative noise component.
- Speckle noise is not normally distributed; it resembles the Rayleigh and Gamma distributions described below:
- Noise is unavoidable in the process of data acquisition.
- Images may also suffer from low contrast due to variations in lighting and a variety of other causes.
- Multiplying an image by speckle noise assigns random values to individual pixels; a minimal sketch of this multiplicative model follows this list.
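The sketch below illustrates multiplicative speckle-noise injection with NumPy, assuming images are float arrays in [0, 1]. The zero-mean Gaussian noise model and the variance value are illustrative assumptions rather than the paper's exact settings; Rayleigh- or Gamma-distributed noise can be drawn with `rng.rayleigh` or `rng.gamma` instead.

```python
import numpy as np

def speckle_attack(image, var=0.05, rng=None):
    """Add multiplicative speckle noise: x_noisy = x + x * n, with n drawn per pixel."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(loc=0.0, scale=np.sqrt(var), size=image.shape)
    noisy = image + image * noise          # multiplicative perturbation of each pixel
    return np.clip(noisy, 0.0, 1.0)        # keep pixels in the valid range
```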
3.3.3. DeepFool (DF) Attack Generation
3.4. Proposed Defense against Adversarial Attacks
3.4.1. Adversarial Training (AT)
- Training 1: original + FGSM attacks images (AT1)
- Training 2: original + SN attacks images (AT2)
- Training 3: original + DF attacks images (AT3)
- Training 4: original + FGSM + SN + DF images (mixed adversarial training, MAT); a minimal training-loop sketch follows this list.
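The sketch below shows one adversarial training epoch in PyTorch, reusing the hypothetical `fgsm_attack` helper above. The optimizer, the 1:1 mixing of clean and attacked images, and the ε value are assumptions rather than the paper's exact configuration; for the mixed setting (MAT), SN- and DF-attacked images would be appended to the batch in the same way.

```python
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.01, device="cpu"):
    """One epoch of training on a mix of clean and FGSM-perturbed images (AT1-style)."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv_images = fgsm_attack(model, images, labels, epsilon)  # craft attacks on the fly
        batch = torch.cat([images, adv_images], dim=0)            # original + attacked images
        targets = torch.cat([labels, labels], dim=0)              # attacked images keep true labels
        optimizer.zero_grad()
        loss = F.cross_entropy(model(batch), targets)
        loss.backward()
        optimizer.step()
```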
3.4.2. Feature Extraction and Feature Fusion Defense
3.4.3. Local Binary Pattern (LBP)
3.4.4. Histogram of Oriented Gradients (HOG)
3.4.5. Segmentation-Based Fractal Texture Analysis (SFTA)
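For illustration, the following sketch extracts LBP and HOG descriptors with scikit-image; SFTA is omitted here because it has no standard library implementation. The radius, neighbourhood size, and HOG cell/block settings are illustrative values, not necessarily the paper's.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog

def handcrafted_features(gray_image):
    """Concatenate an LBP histogram and a HOG descriptor for one grayscale image."""
    # Uniform LBP with 8 neighbours at radius 1, summarised as a normalised histogram.
    lbp = local_binary_pattern(gray_image, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
    # HOG over the whole image with typical cell/block settings.
    hog_vec = hog(gray_image, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([lbp_hist, hog_vec])
```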
3.5. Deep Feature Extraction
Feature Fusion (FF)
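As a rough illustration of serial feature fusion (a sketch, not the authors' exact pipeline), per-image deep features from a pretrained backbone such as DarkNet-53 can be concatenated with the handcrafted descriptors and passed to a classical classifier. The `train_fused_classifier` helper, the RBF-kernel SVM, and the feature scaling step are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse_features(deep_feats, handcrafted_feats):
    """Serial fusion: concatenate per-image deep and handcrafted feature vectors."""
    return np.concatenate([deep_feats, handcrafted_feats], axis=1)

def train_fused_classifier(deep_feats, hand_feats, labels):
    """Fit an SVM on the fused (n_samples, d1 + d2) feature matrix."""
    fused = fuse_features(deep_feats, hand_feats)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # scale, then SVM on fused features
    clf.fit(fused, labels)
    return clf
```

The same fused representation can be fed to the KNN and ensemble classifiers reported in the results tables.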
4. Results and Discussion
Feature Extraction and Feature Fusion Defense
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Albahli, S.; Rauf, H.T.; Arif, M.; Nafis, M.T.; Algosaibi, A. Identification of Thoracic Diseases by Exploiting Deep Neural Networks. Neural Netw. 2021, 5, 6. [Google Scholar]
- Albahli, S.; Rauf, H.T.; Algosaibi, A.; Balas, V.E. AI-driven deep CNN approach for multi-label pathology classification using chest X-Rays. PeerJ Comput. Sci. 2021, 7, e495. [Google Scholar] [CrossRef] [PubMed]
- Abdulsahib, A.A.; Mahmoud, M.A.; Mohammed, M.A.; Rasheed, H.H.; Mostafa, S.A.; Maashi, M.S. Comprehensive review of retinal blood vessel segmentation and classification techniques: Intelligent solutions for green computing in medical images, current challenges, open issues, and knowledge gaps in fundus medical images. Netw. Model. Anal. Health Inform. Bioinform. 2021, 10, 1–32. [Google Scholar]
- Canedo, D.; Neves, A.J.R. Facial Expression Recognition Using Computer Vision: A Systematic Review. Appl. Sci. 2019, 9, 4678. [Google Scholar] [CrossRef] [Green Version]
- Kour, N.; Sunanda; Arora, S. Computer-vision based diagnosis of Parkinson’s disease via gait: A survey. IEEE Access 2019, 7, 156620–156645. [Google Scholar] [CrossRef]
- Mohammed, M.A.; Elhoseny, M.; Abdulkareem, K.H.; Mostafa, S.A.; Maashi, M.S. A Multi-agent Feature Selection and Hybrid Classification Model for Parkinson’s Disease Diagnosis. ACM Trans. Multimed. Comput. Commun. Appl. 2021, 17, 1–22. [Google Scholar] [CrossRef]
- Rauf, H.T.; Lali, M.I.U.; Zahoor, S.; Shah, S.Z.H.; Rehman, A.U.; Bukhari, S.A.C. Visual features based automated identification of fish species using deep convolutional neural networks. Comput. Electron. Agric. 2019, 167, 105075. [Google Scholar] [CrossRef]
- Rauf, H.T.; Saleem, B.A.; Lali, M.I.U.; Khan, M.A.; Sharif, M.; Bukhari, S.A.C. A citrus fruits and leaves dataset for detection and classification of citrus diseases through machine learning. Data Brief 2019, 26, 104340. [Google Scholar] [CrossRef] [PubMed]
- Ahuja, A.S. The impact of artificial intelligence in medicine on the future role of the physician. PeerJ 2019, 7, e7702. [Google Scholar] [CrossRef] [PubMed]
- Van Timmeren, J.E.; Cester, D.; Tanadini-Lang, S.; Alkadhi, H.; Baessler, B. Radiomics in medical imaging—“How-to” guide and critical reflection. Insights Imaging 2020, 11, 1–16. [Google Scholar] [CrossRef]
- Mutlag, A.A.; Khanapi Abd Ghani, M.; Mohammed, M.A.; Maashi, M.S.; Mohd, O.; Mostafa, S.A.; Abdulkareem, K.H.; Marques, G.; de la Torre Díez, I. MAFC: Multi-agent fog computing model for healthcare critical tasks management. Sensors 2020, 20, 1853. [Google Scholar] [CrossRef] [Green Version]
- Lambin, P.; Leijenaar, R.T.; Deist, T.M.; Peerlings, J.; De Jong, E.E.; Van Timmeren, J.; Sanduleanu, S.; Larue, R.T.; Even, A.J.; Jochems, A. Radiomics: The bridge between medical imaging and personalized medicine. Nat. Rev. Clin. Oncol. 2017, 14, 749–762. [Google Scholar] [CrossRef] [PubMed]
- Kuziemsky, C.; Maeder, A.J.; John, O.; Gogia, S.B.; Basu, A.; Meher, S.; Ito, M. Role of Artificial Intelligence within the Telehealth Domain. Yearb. Med. Inform. 2019, 28, 035–040. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Zhou, X.; Ma, Y.; Zhang, Q.; Mohammed, M.A.; Damaševičius, R. A Reversible Watermarking System for Medical Color Images: Balancing Capacity, Imperceptibility, and Robustness. Electronics 2021, 10, 1024. [Google Scholar] [CrossRef]
- Mohammed, M.A.; Abdulkareem, K.H.; Mostafa, S.A.; Ghani, M.K.A.; Maashi, M.S.; Garcia-Zapirain, B.; Oleagordia, I.; Alhakami, H.; Al-Dhief, F.T. Voice pathology detection and classification using convolutional neural network model. Appl. Sci. 2020, 10, 3723. [Google Scholar] [CrossRef]
- Ruta, L.; Magliano, D.; Lemesurier, R.; Taylor, H.; Zimmet, P.; Shaw, J. Prevalence of diabetic retinopathy in Type 2 diabetes in developing and developed countries. Diabet. Med. 2013, 30, 387–398. [Google Scholar] [CrossRef]
- Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama 2016, 316, 2402–2410. [Google Scholar] [CrossRef] [PubMed]
- Orujov, F.; Maskeliūnas, R.; Damaševičius, R.; Wei, W. Fuzzy based image edge detection algorithm for blood vessel detection in retinal images. Appl. Soft Comput. J. 2020, 94. [Google Scholar] [CrossRef]
- Ramasamy, L.; Padinjappurathu, S.; Kadry, S.; Damaševičius, R. Detection of diabetic retinopathy using a fusion of textural and ridgelet features of retinal images and sequential minimal optimization classifier. PeerJ Comput. Sci. 2021, 7, 456. [Google Scholar] [CrossRef]
- Tajbakhsh, N.; Jeyaseelan, L.; Li, Q.; Chiang, J.N.; Wu, Z.; Ding, X. Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Med. Image Anal. 2020, 63, 101693. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Karimi, D.; Dou, H.; Warfield, S.K.; Gholipour, A. Deep learning with noisy labels: Exploring techniques and remedies in medical image analysis. Med. Image Anal. 2020, 65, 101759. [Google Scholar] [CrossRef] [PubMed]
- Qiu, S.; Liu, Q.; Zhou, S.; Wu, C. Review of Artificial Intelligence Adversarial Attack and Defense Technologies. Appl. Sci. 2019, 9, 909. [Google Scholar] [CrossRef] [Green Version]
- Gluck, T.; Kravchik, M.; Chocron, S.; Elovici, Y.; Shabtai, A. Spoofing Attack on Ultrasonic Distance Sensors Using a Continuous Signal. Sensors 2020, 20, 6157. [Google Scholar] [CrossRef]
- Zhou, X.; Xu, M.; Wu, Y.; Zheng, N. Deep Model Poisoning Attack on Federated Learning. Future Internet 2021, 13, 73. [Google Scholar] [CrossRef]
- Chakraborty, A.; Alam, M.; Dey, V.; Chattopadhyay, A.; Mukhopadhyay, D. Adversarial attacks and defences: A survey. arXiv 2018, arXiv:1810.00069. [Google Scholar]
- Edwards, D.; Rawat, D.B. Study of Adversarial Machine Learning with Infrared Examples for Surveillance Applications. Electronics 2020, 9, 1284. [Google Scholar] [CrossRef]
- Ren, K.; Zheng, T.; Qin, Z.; Liu, X. Adversarial Attacks and Defenses in Deep Learning. Engineering 2020, 6, 346–360. [Google Scholar] [CrossRef]
- Nazemi, A.; Fieguth, P. Potential adversarial samples for white-box attacks. arXiv 2019, arXiv:1912.06409. [Google Scholar]
- Lin, J.; Xu, L.; Liu, Y.; Zhang, X. Black-box adversarial sample generation based on differential evolution. J. Syst. Softw. 2020, 170, 110767. [Google Scholar] [CrossRef]
- Alzantot, M.; Sharma, Y.; Chakraborty, S.; Zhang, H.; Hsieh, C.J.; Srivastava, M.B. Genattack: Practical black-box attacks with gradient-free optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, Prague, Czech Republic, 13–17 July 2019; pp. 1111–1119. [Google Scholar]
- Deng, L. The mnist database of handwritten digit images for machine learning research. IEEE Signal Process. Mag. 2012, 29, 141–142. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Nair, V.; Hinton, G. CIFAR-10; Canadian Institute for Advanced Research: Toronto, ON, Canada, 2009. [Google Scholar]
- Gao, X.; Tan, Y.A.; Jiang, H.; Zhang, Q.; Kuang, X. Boosting targeted black-box attacks via ensemble substitute training and linear augmentation. Appl. Sci. 2019, 9, 2286. [Google Scholar] [CrossRef] [Green Version]
- Tabacof, P.; Tavares, J.; Valle, E. Adversarial images for variational autoencoders. arXiv 2016, arXiv:1612.00155. [Google Scholar]
- Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial machine learning at scale. arXiv 2016, arXiv:1611.01236. [Google Scholar]
- Gu, S.; Rigazio, L. Towards deep neural network architectures robust to adversarial examples. arXiv 2014, arXiv:1412.5068. [Google Scholar]
- Siddique, A.; Browne, W.N.; Grimshaw, G.M. Lateralized learning for robustness against adversarial attacks in a visual classification system. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference, Cancún, Mexico, 8–12 July 2020; pp. 395–403. [Google Scholar]
- Huq, A.; Pervin, M. Adversarial Attacks and Defense on Textual Data: A Review. arXiv 2020, arXiv:2005.14108. [Google Scholar]
- Zhang, J.; Sang, J.; Zhao, X.; Huang, X.; Sun, Y.; Hu, Y. Adversarial Privacy-preserving Filter. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 1423–1431. [Google Scholar]
- Wang, Y.; Wang, K.; Zhu, Z.; Wang, F.Y. Adversarial attacks on Faster R-CNN object detector. Neurocomputing 2020, 382, 87–95. [Google Scholar] [CrossRef]
- Li, Y.; Zhu, Z.; Zhou, Y.; Xia, Y.; Shen, W.; Fishman, E.K.; Yuille, A.L. Volumetric Medical Image Segmentation: A 3D Deep Coarse-to-Fine Framework and Its Adversarial Examples. In Deep Learning and Convolutional Neural Networks for Medical Imaging and Clinical Informatics; Springer: Cham, Switzerland, 2019; pp. 69–91. [Google Scholar]
- Zhang, W.E.; Sheng, Q.Z.; Alhazmi, A.; Li, C. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Trans. Intell. Syst. Technol. 2020, 11, 1–41. [Google Scholar]
- Yu, Y.; Lee, H.J.; Kim, B.C.; Kim, J.U.; Ro, Y.M. Investigating Vulnerability to Adversarial Examples on Multimodal Data Fusion in Deep Learning. arXiv 2020, arXiv:2005.10987. [Google Scholar]
- Raval, N.; Verma, M. One word at a time: Adversarial attacks on retrieval models. arXiv 2020, arXiv:2008.02197. [Google Scholar]
- Levine, A.; Feizi, S. (De) Randomized Smoothing for Certifiable Defense against Patch Attacks. arXiv 2020, arXiv:2002.10733. [Google Scholar]
- Wang, H.; Wang, G.; Li, Y.; Zhang, D.; Lin, L. Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 342–351. [Google Scholar]
- Fawaz, H.I.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Adversarial attacks on deep neural networks for time series classification. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
- Yang, Z.; Zhao, Y.; Yan, W. Adversarial Vulnerability in Doppler-based Human Activity Recognition. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–7. [Google Scholar]
- Dong, Y.; Su, H.; Wu, B.; Li, Z.; Liu, W.; Zhang, T.; Zhu, J. Efficient decision-based black-box adversarial attacks on face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 7714–7722. [Google Scholar]
- Hafemann, L.G.; Sabourin, R.; Oliveira, L.S. Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification. IEEE Trans. Inf. Forensics Secur. 2019, 14, 2153–2166. [Google Scholar] [CrossRef] [Green Version]
- García, J.; Majadas, R.; Fernández, F. Learning adversarial attack policies through multi-objective reinforcement learning. Eng. Appl. Artif. Intell. 2020, 96, 104021. [Google Scholar] [CrossRef]
- Zahoor, S.; Lali, I.U.; Khan, M.A.; Javed, K.; Mehmood, W. Breast cancer detection and classification using traditional computer vision techniques: A comprehensive review. Curr. Med. Imaging 2020, 16, 1187–1200. [Google Scholar] [CrossRef]
- Patrício, D.I.; Rieder, R. Computer vision and artificial intelligence in precision agriculture for grain crops: A systematic review. Comput. Electron. Agric. 2018, 153, 69–81. [Google Scholar] [CrossRef] [Green Version]
- Saman, G.; Gohar, N.; Noor, S.; Shahnaz, A.; Idress, S.; Jehan, N.; Rashid, R.; Khattak, S.S. Automatic detection and severity classification of diabetic retinopathy. Multimed. Tools Appl. 2020, 79, 31803–31817. [Google Scholar] [CrossRef]
- Cheng, Y.; Juefei-Xu, F.; Guo, Q.; Fu, H.; Xie, X.; Lin, S.W.; Lin, W.; Liu, Y. Adversarial Exposure Attack on Diabetic Retinopathy Imagery. arXiv 2020, arXiv:2009.09231. [Google Scholar]
- Hirano, H.; Minagi, A.; Takemoto, K. Universal adversarial attacks on deep neural networks for medical image classification. BMC Med. Imaging 2021, 21, 1–13. [Google Scholar] [CrossRef]
- Kang, X.; Song, B.; Du, X.; Guizani, M. Adversarial Attacks for Image Segmentation on Multiple Lightweight Models. IEEE Access 2020, 8, 31359–31370. [Google Scholar] [CrossRef]
- Pineda, L.; Basu, S.; Romero, A.; Calandra, R.; Drozdzal, M. Active MR k-space sampling with reinforcement learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2020; pp. 23–33. [Google Scholar]
- Chen, C.; Qin, C.; Qiu, H.; Ouyang, C.; Wang, S.; Chen, L.; Tarroni, G.; Bai, W.; Rueckert, D. Realistic adversarial data augmentation for MR image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2020; pp. 667–677. [Google Scholar]
- Liu, S.; Setio, A.A.A.; Ghesu, F.C.; Gibson, E.; Grbic, S.; Georgescu, B.; Comaniciu, D. No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks. arXiv 2020, arXiv:2003.03824. [Google Scholar] [CrossRef] [PubMed]
- Paul, R.; Schabath, M.; Gillies, R.; Hall, L.; Goldgof, D. Mitigating adversarial attacks on medical image understanding systems. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1517–1521. [Google Scholar]
- Ding, Y.; Wu, G.; Chen, D.; Zhang, N.; Gong, L.; Cao, M.; Qin, Z. DeepEDN: A Deep Learning-based Image Encryption and Decryption Network for Internet of Medical Things. arXiv 2020, arXiv:2004.05523. [Google Scholar] [CrossRef]
- Anand, D.; Tank, D.; Tibrewal, H.; Sethi, A. Self-Supervision vs. Transfer Learning: Robust Biomedical Image Analysis Against Adversarial Attacks. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1159–1163. [Google Scholar]
- Sharma, Y.; Chen, P.Y. Attacking the Madry Defense Model with L1-based Adversarial Examples. arXiv 2017, arXiv:1710.10733. [Google Scholar]
- Liu, N.; Du, M.; Guo, R.; Liu, H.; Hu, X. Adversarial Machine Learning: An Interpretation Perspective. ACM SIGKDD Explor. Newsl. 2021, 23, 86–99. [Google Scholar] [CrossRef]
- Agarwal, A.; Singh, R.; Vatsa, M.; Ratha, N.K. Image transformation based defense against adversarial perturbation on deep learning models. IEEE Trans. Dependable Comput. Secur. 2020. [Google Scholar] [CrossRef]
- Huang, X.; Kroening, D.; Ruan, W.; Sharp, J.; Sun, Y.; Thamo, E.; Wu, M.; Yi, X. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Comput. Sci. Rev. 2020, 37, 100270. [Google Scholar] [CrossRef]
- Meng, D.; Chen, H. Magnet: A two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 135–147. [Google Scholar]
- Bai, Y.; Feng, Y.; Wang, Y.; Dai, T.; Xia, S.T.; Jiang, Y. Hilbert-based generative defense for adversarial examples. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 4784–4793. [Google Scholar]
- McCoyd, M.; Wagner, D. Background class defense against adversarial examples. In Proceedings of the 2018 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 24 May 2018; pp. 96–102. [Google Scholar]
- Kabilan, V.M.; Morris, B.; Nguyen, H.P.; Nguyen, A. Vectordefense: Vectorization as a defense to adversarial examples. In Soft Computing for Biomedical Applications and Related Topics; Springer: Cham, Switzerland, 2018; pp. 19–35. [Google Scholar]
- Athalye, A.; Carlini, N.; Wagner, D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv 2018, arXiv:1802.00420. [Google Scholar]
- Tripathi, A.M.; Mishra, A. Fuzzy Unique Image Transformation: Defense against Adversarial Attacks on Deep COVID-19 Models. arXiv 2020, arXiv:2009.04004. [Google Scholar]
- Xu, W.; Evans, D.; Qi, Y. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv 2017, arXiv:1704.01155. [Google Scholar]
- Liu, C.; Ye, D. Defend Against Adversarial Samples by Using Perceptual Hash. Comput. Mater. Contin. 2020, 62, 1365–1386. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Zheng, H.; Zhang, Z.; Gu, J.; Lee, H.; Prakash, A. Efficient adversarial training with transferable adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1181–1190. [Google Scholar]
- Moosavi-Dezfooli, S.; Fawzi, A.; Frossard, P. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the CVPR, Boston, MA, USA, 8–10 June 2015; pp. 2574–2582. [Google Scholar]
- Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of Oriented Gradients for Human Detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005. [Google Scholar]
- Costa, A.F.; Humpire-Mamani, G.; Traina, A.J.M. An Efficient Algorithm for Fractal Analysis of Textures. In Proceedings of the 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images, Ouro Preto, Brazil, 22–25 August 2012. [Google Scholar]
- Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
Reference | Methodology | Dataset | Evaluation Measures | Results |
---|---|---|---|---|
[54] | Morphological operation | DiaretDB | SVM classifier | Mild severity DR detection and classification |
[55] | Bracketed Exposure Fusion (BEF) and Convolution Bracketed Exposure Fusion (CBEF) Attacks | Eyepacs | Component-wise multiplicative fusion and element-wise convolutional | DR detection |
[56] | Iterative algorithms for universal perturbations attacks (UPA) | Multiple datasets | Classification of targeted and non-targeted UPA attacks | 80% Accuracy |
[57] | Local perturbation and universal attacks, | Cityscapes | Noise function and Gradient of pixels | Image Segmentation |
[58] | Reinforcement learning, Markov Decision Process | MRI single-coil knee dataset | MSE, NMSE, SSIM and PSNR | MRI phase-encoding sampling |
[64] | Adversarial training by modelling intensity inhomogeneities | Automated Cardiac Diagnosis Challenge (ACDC) | Low-shot learning, learning from limited population | Semantic features for cardiac image segmentation |
[60] | Projected gradient descent (PGD), adversarial synthetic nodules and adversarial perturbations | CT data | False positive reduction rate | Lung nodule detection and prediction with false positive reduction (FPR)
[61] | Fast Gradient Sign Method (FGSM) and one-pixel attacks | National Lung Screening Trial (NLST) dataset | Ensemble-based classification | Malignancy prediction of lung nodules. 1-pixel attack with 82.27% and 81.43% |
[62] | Cycle-generative adversarial network (Cycle-GAN) | Chest X-ray data set | X-ray dataset through ROI (region of interest) | Encrypting and decrypting the medical image through DeepEDN |
[63] | Self-supervised transfer learning combined with adversarial training | Chest X-rays and MRI segmentation images | MRI segmentation using two attacks, PGD and the fast gradient sign method | Pneumonia classification of X-ray images and MRI segmentation
[65] | Untargeted vs Targeted Attack, One-Shot vs Iterative Attack | Fashion-MNIST dataset | Feature-level interpretation and model-level interpretation | Defensive graph-based models, causal models generated |
[66] | Discrete Wavelet Transform and Discrete Sine Transform | Object database (validation set of ImageNet) and face recognition (MBGC) | SVM Classifier | Defense through which adversarial perturbation can be neutralized |
[67] | Dimensionality reduction and characterization of the adversarial region | Multiple datasets | Combining input discretization with adversarial training | Activation transformations for a robust defense against these attacks
[68] | MagNet with Randomization | Adversarial examples (AEs) on a manifold and normal examples. | MagNet DNN classifier | 3% higher than simple MagNet. |
[69] | Hilbert-based generative PixelCNN, Hilbert-based PixelDefend (HPD) | Adversarial examples (AEs) | Ensemble of Hilbert curves with different orientations | PixelDefend mapping pixels from 2-D to 1-D
[70] | Crafts attacks, Background class image classification training | EMNIST Dataset | Weak or small adversarial attacks samples based | Constructing background images between the key classes and artificially expanding the background data |
[71] | Potrace vectorization algorithm | MNIST handwritten digits dataset | In high-dimensional color image space, simple image tracing may not yield compact and interpretable elements | Vector images are resolution-independent and can be rasterized back into much smaller images
[72] | Obfuscated gradients, iterative optimization-based attacks | ICLR 2018 defenses | False sense of security | Circumventing defenses that prevent gradient descent-based attacks for perceived robustness
[64] | EAD (elastic-net) attack against the Madry L∞ defense model | MNIST digits dataset | Local first-order information, minimum distortion | EAD outperforms PGD in transferability in the targeted case
[73] | Fuzzy Unique Image Transformation (FUIT) | Chest X-ray and CT image dataset | Downsampling of image pixels into an interval | Diagnosis of COVID-19 through a DNN model
[74] | Feature squeezing | MNIST, CIFAR-10, ImageNet | Joint detection with multiple squeezers, adversarial adaptation | Color depth reduction, median smoothing, non-local smoothing
[75] | Perceptual hash | CIFAR-10 | JSMA (gradient-based) attack and One Pixel (evolutionary) attack | White-box attack success rate of 36.3% and black-box attack success rate of 72.8%
Original Class Label | Attacks Applied | Predicted Label After Attack | Accuracy with Class DR1 (%) | Accuracy with Class DR2 (%) | Accuracy with Class DR3 (%) |
---|---|---|---|---|---|
DR1 | FGSM | DR2 | 0 | 93.01 | 6.99 |
DR2 | FGSM | DR1 | 81.71 | 0 | 18.29 |
DR3 | FGSM | DR2 | 0 | 91.09 | 8.91 |
DR1 | SN | DR2 | 12.27 | 87.98 | 0 |
DR2 | SN | DR3 | 10.09 | 10.82 | 79.09 |
DR3 | SN | DR1 | 89.82 | 10.18 | 0 |
DR1 | DF | DR2 | 0 | 100 | 0 |
DR2 | DF | DR1 | 82 | 17.59 | 0.41 |
DR3 | DF | DR2 | 0 | 99.75 | 0.41 |
Original Class Label | Attacks Applied | Predicted Label After Attack | Accuracy with Class DR1 (%) | Accuracy with Class DR2 (%) | Accuracy with Class DR3 (%) |
---|---|---|---|---|---|
DR1 | FGSM | DR1 | 88.86 | 0 | 11.14 |
DR2 | FGSM | DR2 | 20 | 72.07 | 7.93 |
DR3 | FGSM | DR3 | 10.01 | 8.94 | 81.05 |
DR1 | SN | DR2 | 40 | 57.23 | 2.77 |
DR2 | SN | DR1 | 70 | 0 | 30 |
DR3 | SN | DR3 | 0 | 0 | 40 |
DR1 | DF | DR3 | 0 | 40.97 | 58.50 |
DR2 | DF | DR1 | 89.79 | 0.21 | 10.0 |
DR3 | DF | DR3 | 0 | 35.32 | 64.51 |
Original Class Label | Attacks Applied | Predicted Label After Attack | Accuracy with Class DR1 (%) | Accuracy with Class DR2 (%) | Accuracy with Class DR3 (%) |
---|---|---|---|---|---|
DR1 | FGSM | DR3 | 2.71 | 20.3 | 76.98 |
DR2 | FGSM | DR2 | 0 | 62.37 | 37.26 |
DR3 | FGSM | DR1 | 81.34 | 4.7 | 13.96 |
DR1 | SN | DR1 | 98.02 | 1.98 | 0 |
DR2 | SN | DR2 | 0.02 | 88.66 | 11.32 |
DR3 | SN | DR3 | 0 | 5.98 | 94.07 |
DR1 | DF | DR2 | 14.62 | 71.8 | 13.58 |
DR2 | DF | DR3 | 0 | 10.15 | 89.95 |
DR3 | DF | DR1 | 82.75 | 1.8 | 15.45 |
Original Class Label | Attacks Applied | Predicted Label After Attack | Accuracy with Class DR1 (%) | Accuracy with Class DR2 (%) | Accuracy with Class DR3 (%) |
---|---|---|---|---|---|
DR1 | FGSM | DR1 | 68.05 | 20.97 | 20.09 |
DR2 | FGSM | DR3 | 29.91 | 10.91 | 50.09 |
DR3 | FGSM | DR3 | 3.90 | 35.32 | 74.51 |
DR1 | SN | DR3 | 39.91 | 0 | 60.09 |
DR2 | SN | DR2 | 9.87 | 90.02 | 0.01 |
DR3 | SN | DR3 | 0 | 45.32 | 54.51 |
DR1 | DF | DR1 | 82.97 | 17.02 | 0 |
DR2 | DF | DR2 | 0.04 | 99.96 | 0 |
DR3 | DF | DR3 | 0 | 0 | 100 |
Original Class Label | Attacks Applied | Predicted Label After Attack | Accuracy with Class DR1 (%) | Accuracy with Class DR2 (%) | Accuracy with Class DR3 (%) |
---|---|---|---|---|---|
DR1 | FGSM | DR1 | 99.83 | 0.11 | 0 |
DR2 | FGSM | DR2 | 23.52 | 74.94 | 1.54 |
DR3 | FGSM | DR3 | 2.89 | 0.02 | 97.09 |
DR1 | SN | DR1 | 94.46 | 4.83 | 0.71 |
DR2 | SN | DR2 | 0 | 99 | 1 |
DR3 | SN | DR3 | 17.9 | 0 | 82.09 |
DR1 | DF | DR1 | 96.51 | 0 | 3.29 |
DR2 | DF | DR2 | 0 | 100 | 0 |
DR3 | DF | DR3 | 0 | 0.01 | 99.99 |
Training Dataset | Testing Dataset | Correct Label Prediction % |
---|---|---|
Original Dataset | Original dataset | 100% |
Original Dataset | FGSM Attacked Dataset | 0% |
Original Dataset | SN Attacked Dataset | 10% |
Original Dataset | DF Attacked Dataset | 0% |
Adversarial Training (AT1) | Original + FGSM | 62% |
Adversarial Training (AT2) | Original + SN | 52% |
Adversarial Training (AT3) | Original + DF | 66% |
Adversarial Training (Mixed Data, MAT) | Original + FGSM + SN + DF | 92% |
Model | SVM | KNN (Cubic) | Ensemble |
---|---|---|---|
DarkNet-53 | 80.9% | 79.6% | 90.3% |
HOG+SFTA+LBP | 82.3% | 84.1% | 85.5% |
Proposed Model | 99.9% | 99.5% | 99.9% |
Class | No of Instances | Accuracy (%) | Precision | Recall | F1-Score |
---|---|---|---|---|---|
DR1 | 2543 | 99.94 | 1.0 | 1.0 | 1.0 |
DR2 | 2509 | 99.95 | 1.0 | 1.0 | 1.0 |
DR3 | 1591 | 99.98 | 1.0 | 1.0 | 1.0 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).