Abstract
An adversarial patch is an image-independent patch that misleads deep neural networks into outputting a targeted class. Existing defense strategies mainly rely on detecting the patch through the frequency or semantic gap between the patch and the clean image. We found, however, that these defenses are effective only because that gap is large: existing patch attacks merely search for an effective patch rather than an optimized patch that minimizes the gap. We therefore propose two improved patches, enhanced and smoothed patches, that reduce the gap, and consequently the decision boundaries that existing defenses draw around adversarial examples are successfully obscured. To counter these improved patches, we propose a defense method based on image preprocessing. We leverage multi-scale Gaussian blur to re-amplify the reduced gap between the patch and the clean image. Because a patch carries dense, high-frequency information, the dissimilarity between its Gaussian blurs at different scales is higher than that of clean image regions. By enhancing local details at one set of scales and weakening them at another, we maximize the effect of blurring on the patch's high-frequency content. In this way, our defense efficiently distorts adversarial patches while having only a negligible impact on clean images.
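The detection signal described above can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes a simple dissimilarity score (mean absolute difference between two Gaussian blurs of the same image at different scales, with hypothetical sigmas 1 and 3) and shows that a dense high-frequency region scores much higher than a smooth clean region.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel truncated at 3 sigma, normalized to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable 2-D Gaussian blur with edge padding (same output shape)."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    padded = np.pad(img, r, mode="edge")
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, rows)

def multiscale_gap(img, sigmas=(1.0, 3.0)):
    """Dissimilarity between blurs at two scales; high for dense patches."""
    b_small, b_large = blur(img, sigmas[0]), blur(img, sigmas[1])
    return np.abs(b_small - b_large).mean()

rng = np.random.default_rng(0)
patch = rng.random((32, 32))                       # dense high-frequency content
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))  # smooth clean region

print(multiscale_gap(patch) > multiscale_gap(clean))  # patch scores higher
```

Under this assumption, thresholding the per-region score separates patch regions from clean ones; the enhanced/smoothed patches in the paper are designed to shrink exactly this kind of gap, which is why the defense re-amplifies it before scoring.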
© 2021 Springer Nature Singapore Pte Ltd.
Cite this paper
Fu, Y., Zheng, X., Du, P., Liu, L. (2021). Analysis and Countermeasure Design on Adversarial Patch Attacks. In: Cui, L., Xie, X. (eds) Wireless Sensor Networks. CWSN 2021. Communications in Computer and Information Science, vol 1509. Springer, Singapore. https://doi.org/10.1007/978-981-16-8174-5_14
Print ISBN: 978-981-16-8173-8
Online ISBN: 978-981-16-8174-5