Image Adversarial Example Generation Method Based on Adaptive Parameter Adjustable Differential Evolution
Figure 1. The flow chart of image adversarial example generation.
Figure 2. The process of finding adversarial perturbations using the adaptive parameter adjustable differential evolution method.
Figure 3. The trend of μ_F and μ_CR with the number of population iterations.
Figure 4. The adversarial examples generated by our proposed method. By perturbing a very small number of the images' pixels, our method successfully fooled three neural networks: ResNet, NinN, and VGG16. (a) Adversarial examples generated for CIFAR10. (b) Adversarial examples generated for MNIST.
Figure 5. Heat maps of the number of times each class in CIFAR10 was perturbed to other classes when attacking the networks. (a) Attack on ResNet. (b) Attack on NinN. (c) Attack on VGG16.
Figure 6. Heat maps of the number of times each class in MNIST was perturbed to other classes when attacking the networks. (a) Attack on ResNet. (b) Attack on NinN. (c) Attack on VGG16.
Figure 7. The success rate of our method compared with the OPA on CIFAR10-based ResNet and NinN.
Abstract
1. Introduction
- An image adversarial example generation method based on differential evolution (DE) is proposed for the black-box setting, which achieves a high attack success rate while perturbing only a very small number of the image's pixels.
- An adaptive parameter adjustable differential evolution algorithm is proposed to find the optimal perturbation. It adaptively adjusts the DE's control parameters and operation strategies to meet the different requirements of each stage of the search, so the optimal perturbation is obtained with a higher probability (a minimal sketch of this style of parameter adaptation follows this list).
- Experiments confirm the efficacy of the proposed method. The results demonstrate that, compared with the one-pixel attack (OPA), our method generates adversarial examples more efficiently. In particular, when extended to three-pixel and five-pixel attacks, it raises the attack success rate significantly. In addition, the perturbation rate required by the proposed method is substantially lower than that of global or local perturbation attacks, which further improves its ability to evade detection and perception in physical environments.
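As an illustration of what such parameter adaptation can look like, the sketch below follows the JADE-style convention of drawing F from a Cauchy distribution and CR from a normal distribution around adaptive means μ_F and μ_CR, then shifting those means toward the values that produced successful trials. The learning-rate constant c = 0.1 and the 0.1 scale parameters are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def sample_params(mu_F, mu_CR, rng):
    """Draw per-individual control parameters around the adaptive means:
    F from a Cauchy distribution, CR from a normal distribution, both
    kept inside sensible ranges, as is conventional in adaptive DE."""
    F = rng.standard_cauchy() * 0.1 + mu_F
    while F <= 0:                                   # regenerate non-positive draws
        F = rng.standard_cauchy() * 0.1 + mu_F
    F = min(F, 1.0)
    CR = float(np.clip(rng.normal(mu_CR, 0.1), 0.0, 1.0))
    return F, CR

def update_means(mu_F, mu_CR, successful_F, successful_CR, c=0.1):
    """Shift the means toward parameter values that produced improved trial
    vectors in the current generation (Lehmer mean for F, arithmetic mean
    for CR), so later generations inherit the settings that worked."""
    if successful_F:
        lehmer_F = sum(f * f for f in successful_F) / sum(successful_F)
        mu_F = (1 - c) * mu_F + c * lehmer_F
    if successful_CR:
        mu_CR = (1 - c) * mu_CR + c * sum(successful_CR) / len(successful_CR)
    return mu_F, mu_CR
```

Figure 3 plots the resulting trend of μ_F and μ_CR over the population iterations; the exact trajectories depend on the update rule actually used in the paper.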
2. Related Work
3. Problem Description
4. Proposed Method
4.1. Initialization
4.2. Adaptive Mutation
4.3. Adaptive Crossover
4.4. Selection
Algorithm 1: Image Adversarial Example Generation Method Based on Adaptive Parameter Adjustable Differential Evolution
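As a rough illustration of the general shape of such a DE-based few-pixel attack (not the paper's exact Algorithm 1), the sketch below encodes each candidate perturbation as (row, column, R, G, B) tuples, following the one-pixel-attack convention, and queries the classifier only for its output probabilities. The population size, iteration budget, and the fixed mutation factor F = 0.5 are illustrative assumptions; the proposed method instead adapts F and CR as sketched earlier.

```python
import numpy as np

def attack(image, true_label, predict, n_pixels=1, pop_size=400,
           max_iter=100, seed=0):
    """Search for a sparse adversarial perturbation with differential evolution.
    Each candidate encodes n_pixels tuples of (row, col, R, G, B) scaled to [0, 1].
    `predict` maps an HxWx3 image (0-255 values) to class probabilities and is the
    only access to the model: no gradients are requested."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    pop = rng.uniform(0.0, 1.0, (pop_size, 5 * n_pixels))

    def apply(candidate):
        img = image.copy()
        for p in candidate.reshape(n_pixels, 5):
            r, c = int(p[0] * (h - 1)), int(p[1] * (w - 1))
            img[r, c] = p[2:] * 255.0                    # overwrite one pixel's RGB values
        return img

    def fitness(candidate):
        # Non-targeted attack: drive down the confidence of the true class.
        return predict(apply(candidate))[true_label]

    scores = np.array([fitness(ind) for ind in pop])
    for _ in range(max_iter):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = np.clip(a + 0.5 * (b - c), 0.0, 1.0)  # DE/rand/1 with fixed F = 0.5
            s = fitness(trial)
            if s < scores[i]:                             # greedy one-to-one selection
                pop[i], scores[i] = trial, s
        best = int(np.argmin(scores))
        if int(np.argmax(predict(apply(pop[best])))) != true_label:
            return apply(pop[best])                       # misclassified: attack succeeded
    return None                                           # no adversarial example found
```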
5. Experiment and Analysis
5.1. Experimental Setup
5.2. Analyze the Success Rate of Attack
5.3. Analyze the Sensitivity of Attack
5.4. Comparison of Experimental Results
- Our method does not use gradient information in the optimization and does not require the objective function to be differentiable or known in advance. It is therefore a black-box attack and, in practice, more widely applicable than gradient-based methods (a self-contained usage sketch follows this list).
- Compared with gradient descent or greedy search algorithms, our method is less likely to be trapped in local optima and finds the globally optimal perturbation with a higher probability.
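Because the search only reads the classifier's output probabilities, any callable that returns a probability vector can stand in for the target network. The usage sketch below reuses the illustrative `attack` loop given under Algorithm 1 and substitutes a random linear soft-max "model" purely so the snippet runs on its own; it is not part of the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in black-box classifier used only to keep this snippet self-contained;
# in the experiments the queries would go to ResNet, NinN, or VGG16 instead.
W = rng.normal(size=(10, 32 * 32 * 3))

def predict(img):
    z = W @ (img.reshape(-1) / 255.0)   # forward pass only, never a gradient
    e = np.exp(z - z.max())
    return e / e.sum()

clean = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float64)
label = int(np.argmax(predict(clean)))

adv = attack(clean, label, predict, n_pixels=3, pop_size=100, max_iter=20)
print("succeeded" if adv is not None else "failed within the query budget")
```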
6. Conclusions
- There are numerous variants of the DE; some enhance the mutation strategy mechanism [45,46], while others combine the DE with other intelligent algorithms [47,48]. Selecting an appropriate DE variant for a given problem could enable adversarial attacks that are more effective and precise.
- Adversarial defense will also be a key area of future study. Most conventional defense strategies have either been cracked or proven ineffective [49,50,51]. Adversarial example detection techniques, which serve as a supplementary defense strategy, also fail to fully distinguish original samples from adversarial examples [52].
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Liu, H.; Zhao, B.; Huang, L.; Gou, J.; Liu, Y. FoolChecker: A platform to evaluate the robustness of images against adversarial attacks. Neurocomputing 2020, 412, 216–225.
2. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199.
3. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572.
4. Kurakin, A.; Goodfellow, I.J.; Bengio, S. Adversarial examples in the physical world. arXiv 2018, arXiv:1607.02533.
5. Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; Li, J. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9185–9193.
6. Moosavi-Dezfooli, S.M.; Fawzi, A.; Frossard, P. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2574–2582.
7. Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z.B.; Swami, A. The limitations of deep learning in adversarial settings. In Proceedings of the 2016 IEEE European Symposium on Security and Privacy, Saarbruecken, Germany, 21–24 March 2016; IEEE: New York, NY, USA, 2016; pp. 372–387.
8. Phan, H.; Xie, Y.; Liao, S.; Chen, J.; Yuan, B. CAG: A real-time low-cost enhanced-robustness high-transferability content-aware adversarial attack generator. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 5412–5419.
9. Carlini, N.; Wagner, D. Towards evaluating the robustness of neural networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy, San Jose, CA, USA, 22–26 May 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 39–57.
10. Zhang, C.; Benz, P.; Imtiaz, T.; Kweon, I.S. Understanding adversarial examples from the mutual influence of images and perturbations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 14521–14530.
11. Papernot, N.; McDaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z.B.; Swami, A. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, Abu Dhabi, United Arab Emirates, 2–6 April 2017; pp. 506–519.
12. Wang, X.; He, X.; Wang, J.; He, K. Admix: Enhancing the transferability of adversarial attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 16158–16167.
13. Gao, L.; Zhang, Q.; Song, J.; Liu, X.; Shen, H. Patch-wise attack for fooling deep neural network. In Computer Vision–ECCV 2020, 16th European Conference, Glasgow, UK, 23–28 August 2020; Part XXVIII; Springer International Publishing: New York, NY, USA, 2020; pp. 307–322.
14. Yuan, Z.; Zhang, J.; Jia, Y.; Tan, C.; Xue, T.; Shan, S. Meta gradient adversarial attack. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 7748–7757.
15. Wu, W.; Su, Y.; Lyu, M.R.; King, I. Improving the transferability of adversarial samples with adversarial transformations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 9024–9033.
16. Zhou, M.; Wu, J.; Liu, Y.; Liu, S.; Zhu, C. DaST: Data-free substitute training for adversarial attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 234–243.
17. Wang, W.; Yin, B.; Yao, T.; Zhang, L.; Fu, Y.; Ding, S.; Li, J.; Huang, F.; Xue, X. Delving into data: Effectively substitute training for black-box attack. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4761–4770.
18. Narodytska, N.; Kasiviswanathan, S.P. Simple black-box adversarial attacks on deep neural networks. In Proceedings of the Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 1310–1318.
19. Su, J.; Vargas, D.V.; Sakurai, K. One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 2019, 23, 828–841.
20. Ding, K.; Liu, X.; Niu, W.; Hu, T.; Wang, Y.; Zhang, X. A low-query black-box adversarial attack based on transferability. Knowl. Based Syst. 2021, 226, 107102.
21. Lin, J.; Song, C.; He, K.; Wang, L.; Hopcroft, J.E. Nesterov accelerated gradient and scale invariance for adversarial attacks. arXiv 2020, arXiv:1908.06281.
22. Wang, L.; Zhang, H.; Yi, J.; Hsieh, C.-J.; Jiang, Y. Spanning attack: Reinforce black-box attacks with unlabeled data. Mach. Learn. 2020, 109, 2349–2368.
23. Rahmati, A.; Moosavi-Dezfooli, S.M.; Frossard, P.; Dai, H. GeoDA: A geometric framework for black-box adversarial attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8446–8455.
24. Shi, Y.; Han, Y.; Tian, Q. Polishing decision-based adversarial noise with a customized sampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1030–1038.
25. Chen, J.; Jordan, M.I.; Wainwright, M.J. HopSkipJumpAttack: A query-efficient decision-based attack. In Proceedings of the 2020 IEEE Symposium on Security and Privacy, San Francisco, CA, USA, 18–21 May 2020; IEEE: New York, NY, USA, 2020; pp. 1277–1294.
26. Moosavi-Dezfooli, S.M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1765–1773.
27. Sarkar, S.; Bansal, A.; Mahbub, U.; Chellappa, R. UPSET and ANGRI: Breaking high performance image classifiers. arXiv 2017, arXiv:1707.01159.
28. Feng, Y.; Wu, B.; Fan, Y.; Liu, L.; Li, Z.; Xia, S. Efficient black-box adversarial attack guided by the distribution of adversarial perturbations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
29. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
30. Su, J.; Vargas, D.V.; Sakurai, K. Attacking convolutional neural network using differential evolution. IPSJ Trans. Comput. Vis. Appl. 2019, 11, 5.
31. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2008, 13, 398–417.
32. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
33. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; IEEE: New York, NY, USA, 2013; pp. 71–78.
34. Zhang, C. Theory and Application of Differential Evolutionary Algorithms; Beijing University of Technology Press: Beijing, China, 2014; ISBN 9787564062248.
35. Kushida, J.; Hara, A.; Takahama, T. Generation of adversarial examples using adaptive differential evolution. Int. J. Innov. Comput. Inf. Control 2020, 16, 405–414.
36. Wang, K.; Mao, L.; Wu, M.; Wang, K.; Wang, Y. Optimized one-pixel attack algorithm and its defense research. Netw. Secur. Technol. Appl. 2020, 63–66.
37. Vargas, D.V.; Kotyan, S. Model agnostic dual quality assessment for adversarial machine learning and an analysis of current neural networks and defenses. arXiv 2019, arXiv:1906.06026.
38. Kotyan, S.; Vargas, D.V. Adversarial robustness assessment: Why both L0 and L∞ attacks are necessary. PLoS ONE 2022, 17, e0265723.
39. Vargas, D.V. One-Pixel Attack: Understanding and improving deep neural networks with evolutionary computation. In Deep Neural Evolution: Deep Learning with Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2020; pp. 401–430.
40. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
41. Lin, M.; Chen, Q.; Yan, S. Network in network. arXiv 2013, arXiv:1312.4400.
42. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
43. Krizhevsky, A.; Hinton, G. Learning multiple layers of features from tiny images. Technical Report; University of Toronto: Toronto, ON, Canada, 2009.
44. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
45. Wu, G.; Mallipeddi, R.; Suganthan, P.N.; Wang, R.; Chen, H. Differential evolution with multi-population based ensemble of mutation strategies. Inf. Sci. 2016, 329, 329–345.
46. Ni, H.; Peng, C.; Zhou, X.; Yu, L. Differential evolution algorithm with stage-based strategy adaption. Comput. Sci. 2019, 46, 106–110.
47. Yang, S.; Sato, Y. Modified bare bones particle swarm optimization with differential evolution for large scale problem. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 24–29 July 2016; IEEE: New York, NY, USA, 2016; pp. 2760–2767.
48. Zhang, X.; Tu, Q.; Kang, Q.; Cheng, J. Hybrid optimization algorithm based on grey wolf optimization and differential evolution for function optimization. Comput. Sci. 2017, 44, 93–98.
49. Carlini, N.; Wagner, D. Defensive distillation is not robust to adversarial examples. arXiv 2016, arXiv:1607.04311.
50. Carlini, N.; Wagner, D. MagNet and “Efficient defenses against adversarial attacks” are not robust to adversarial examples. arXiv 2017, arXiv:1711.08478.
51. Athalye, A.; Carlini, N.; Wagner, D. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the International Conference on Machine Learning, Stockholmsmässan, Stockholm, Sweden, 10–15 July 2018; Volume 80, pp. 274–283.
52. Liu, H.; Zhao, B.; Guo, J.; Peng, Y. Survey on adversarial attacks towards deep learning. J. Cryptologic Res. 2021, 8, 202–214.
| Dataset | ResNet | NinN | VGG16 |
|---|---|---|---|
| CIFAR10 | 92.31% | 86.07% | 78.28% |
| MNIST | 99.15% | 99.01% | 95.82% |
| Dataset | Non-Targeted, 1-Pixel | Non-Targeted, 3-Pixel | Non-Targeted, 5-Pixel | Targeted, 1-Pixel | Targeted, 3-Pixel | Targeted, 5-Pixel | Network |
|---|---|---|---|---|---|---|---|
| CIFAR10 | 40% | 71% | 84% | 16.67% | 40% | 52.22% | ResNet |
| CIFAR10 | 44% | 75% | 83% | 16.77% | 42.22% | 52.22% | NinN |
| CIFAR10 | 52% | 77% | 84% | 33.33% | 61.11% | 74.44% | VGG16 |
| MNIST | 4% | 40% | 60% | — | 10% | 27.78% | ResNet |
| MNIST | 4% | 26% | 51% | 10% | 26.67% | 34.44% | NinN |
| MNIST | 10% | 37% | 56% | 8.89% | 23.33% | 31.11% | VGG16 |
| Method | 1 Modified Pixel | 3 Modified Pixels | 5 Modified Pixels | Network |
|---|---|---|---|---|
| Ours | 40% | 71% | 84% | ResNet |
| JADE | 32.5% | 77.5% | — | ResNet |
| PSO | 31% | 59% | 65% | ResNet |
| CMA-ES | 12% | 52% | 73% | ResNet |
| Ours | 44% | 75% | 83% | NinN |
| CMA-ES | 18% | 62% | 81% | NinN |
| Dataset | Method | Perturbation (Pixels, % of Image) | White/Black-Box | Attack Type |
|---|---|---|---|---|
| CIFAR10 | Ours | 1 (0.10%) | Black-box | Adaptive DE-based |
| CIFAR10 | Ours | 3 (0.29%) | Black-box | Adaptive DE-based |
| CIFAR10 | Ours | 5 (0.49%) | Black-box | Adaptive DE-based |
| CIFAR10 | LSA | 38 (3.75%) | Black-box | Greedy search-based |
| CIFAR10 | DF | 307 (30%) | White-box | Gradient-based |
| CIFAR10 | FGSM | 1024 (100%) | White-box | Gradient-based |
| MNIST | Ours | 1 (0.13%) | Black-box | Adaptive DE-based |
| MNIST | Ours | 3 (0.38%) | Black-box | Adaptive DE-based |
| MNIST | Ours | 5 (0.64%) | Black-box | Adaptive DE-based |
| MNIST | LSA | 18 (2.24%) | Black-box | Greedy search-based |
| MNIST | JSMA | 32 (4.06%) | White-box | Gradient-based |
| MNIST | FGSM | 1024 (100%) | White-box | Gradient-based |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lin, Z.; Peng, C.; Tan, W.; He, X. Image Adversarial Example Generation Method Based on Adaptive Parameter Adjustable Differential Evolution. Entropy 2023, 25, 487. https://doi.org/10.3390/e25030487
Lin Z, Peng C, Tan W, He X. Image Adversarial Example Generation Method Based on Adaptive Parameter Adjustable Differential Evolution. Entropy. 2023; 25(3):487. https://doi.org/10.3390/e25030487
Chicago/Turabian Style: Lin, Zhiyi, Changgen Peng, Weijie Tan, and Xing He. 2023. "Image Adversarial Example Generation Method Based on Adaptive Parameter Adjustable Differential Evolution" Entropy 25, no. 3: 487. https://doi.org/10.3390/e25030487