Abstract
Detecting adversarial examples and rejecting them before they reach a CNN classifier is a crucial defense against attacks that would otherwise fool the CNN. Since attackers usually down-sample images to match the CNN input size, and detection methods are commonly evaluated on down-sampled images, we study how the interpolation algorithm used for down-sampling affects the detectability of adversarial examples when the legitimate image is down-sampled before being attacked. Because down-sampling alters the relationships among neighboring pixels, steganalysis-based detectors that rely on neighborhood dependencies are likely to be strongly affected. Experimental results on ImageNet confirm that detection accuracy varies dramatically across interpolation kernels (the largest gap is about 9%), and this phenomenon holds consistently across the tested CNN models and attack algorithms for the steganalysis-based detection method. Our work is of interest to both attackers and defenders for benchmarking attack algorithms and detection methods, respectively.
This work was partially supported by NSFC (No. 41865006), Sichuan Science and Technology Program (No. 2022YFG0321, 2022NSFSC0916).
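To make the studied pipeline concrete, the sketch below (a minimal Python illustration, not the authors' implementation) down-samples a legitimate image with a chosen interpolation kernel to the CNN input size and then crafts a one-step FGSM adversarial example from the down-sampled image. The model, label, epsilon value, and kernel list are assumptions, and the steganalysis-based detector that would consume the resulting image pairs is only indicated in a comment.

import torch.nn.functional as F
import torchvision.transforms.functional as TF
from PIL import Image

# Interpolation kernels to compare; the exact set here is illustrative.
KERNELS = {
    "nearest": Image.NEAREST,
    "bilinear": Image.BILINEAR,
    "bicubic": Image.BICUBIC,
    "lanczos": Image.LANCZOS,
}

def fgsm(model, x, label, eps=2 / 255):
    # One-step FGSM (Goodfellow et al.); x is a (1, 3, H, W) tensor in [0, 1],
    # label a LongTensor of shape (1,). eps is an assumed attack budget.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), label).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def downsample_then_attack(model, pil_img, label, kernel_name, size=224):
    # Down-sample the legitimate image with the chosen kernel, then attack it.
    small = pil_img.resize((size, size), KERNELS[kernel_name])
    x = TF.to_tensor(small).unsqueeze(0)   # legitimate, down-sampled input
    x_adv = fgsm(model, x, label)          # its adversarial counterpart
    return x, x_adv

# For each kernel, the (x, x_adv) pairs would be handed to a steganalysis-based
# detector (e.g. rich-model features with an ensemble classifier) and the
# detection accuracy compared across kernels, which is the effect studied here.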
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Peng, A. et al. (2023). Effect of Image Down-sampling on Detection of Adversarial Examples. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Communications in Computer and Information Science, vol 1791. Springer, Singapore. https://doi.org/10.1007/978-981-99-1639-9_46
DOI: https://doi.org/10.1007/978-981-99-1639-9_46
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-1638-2
Online ISBN: 978-981-99-1639-9