Abstract
The frequency spectrum has played a significant role in learning unique and discriminating features for object recognition. Both low- and high-frequency information present in images has been extracted and learnt by a host of representation learning techniques, including deep learning. Inspired by this observation, we introduce a novel class of adversarial attacks, namely ‘WaveTransform’, that creates adversarial noise corresponding to the low-frequency and high-frequency subbands, separately (or in combination). The frequency subbands are analyzed using wavelet decomposition; the subbands are corrupted and then used to construct an adversarial example. Experiments are performed using multiple databases and CNN models to establish the effectiveness of the proposed WaveTransform attack and to analyze the importance of each frequency component. The robustness of the proposed attack is also evaluated through its transferability and its resiliency against a recent adversarial defense algorithm. Experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs.
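To make the general recipe concrete, the block below is a minimal, hypothetical sketch (not the authors' released code): it uses a single-level Haar wavelet transform written in PyTorch, takes a one-step FGSM-style gradient update on a chosen set of subbands, and reconstructs the perturbed image. The helper names (`haar_dwt2`, `haar_idwt2`, `wavelet_attack`) and parameters (`eps`, `subbands`) are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: perturb selected Haar wavelet subbands of an image so that a
# CNN misclassifies the reconstruction. Illustrative only; the paper's attack
# may differ in wavelet basis, optimization, and update schedule.
import torch
import torch.nn.functional as F

def haar_dwt2(x):
    """Single-level 2D Haar transform; x has shape (B, C, H, W) with even H, W."""
    a = x[..., 0::2, 0::2]
    b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]
    d = x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 2   # low-frequency approximation
    lh = (a - b + c - d) / 2   # detail (high-frequency) subbands
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2; subbands have shape (B, C, H/2, W/2)."""
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    bsz, ch, h, w = ll.shape
    x = ll.new_zeros(bsz, ch, 2 * h, 2 * w)
    x[..., 0::2, 0::2] = a
    x[..., 0::2, 1::2] = b
    x[..., 1::2, 0::2] = c
    x[..., 1::2, 1::2] = d
    return x

def wavelet_attack(model, x, y, eps=0.01, subbands=("lh", "hl", "hh")):
    """One gradient step on the chosen subbands (hypothetical helper)."""
    names = ("ll", "lh", "hl", "hh")
    coeffs = {k: v.clone().detach().requires_grad_(True)
              for k, v in zip(names, haar_dwt2(x))}
    x_rec = haar_idwt2(*(coeffs[k] for k in names))
    loss = F.cross_entropy(model(x_rec), y)
    loss.backward()
    with torch.no_grad():
        for k in subbands:                      # corrupt only the selected bands
            coeffs[k] += eps * coeffs[k].grad.sign()
    x_adv = haar_idwt2(*(coeffs[k] for k in names))
    return x_adv.clamp(0, 1).detach()
```

Choosing `subbands=("ll",)` corrupts only the low-frequency approximation, while the default perturbs the detail bands; the paper studies such subbands separately and in combination, as stated in the abstract.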
Notes
1. The original code provided by the authors is used to perform the experiments.
Acknowledgements
A. Agarwal was partly supported by the Visvesvaraya PhD Fellowship. R. Singh and M. Vatsa are partially supported through a research grant from MHA, India. M. Vatsa is also partially supported through Swarnajayanti Fellowship by the Government of India.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Anshumaan, D., Agarwal, A., Vatsa, M., Singh, R. (2020). WaveTransform: Crafting Adversarial Examples via Input Decomposition. In: Bartoli, A., Fusiello, A. (eds) Computer Vision – ECCV 2020 Workshops. ECCV 2020. Lecture Notes in Computer Science, vol 12535. Springer, Cham. https://doi.org/10.1007/978-3-030-66415-2_10
DOI: https://doi.org/10.1007/978-3-030-66415-2_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-66414-5
Online ISBN: 978-3-030-66415-2
eBook Packages: Computer Science (R0)