Adversarial attack on a neural network

This repository contains an example run of an adversarial attack on two CIFAR-10 CNNs.

We attack a pretrained ResNet-20 via the DeepFool method of Moosavi-Dezfooli et al. (2015). DeepFool is a whitebox evasion attack, meaning it needs gradient access to the underlying classifier, so we generate a perturbed dataset against the ResNet-20 and then use that dataset to attack an unrelated model, a pretrained VGG16 architecture. Because the VGG16 model was not involved in generating the adversarial dataset, this second step constitutes a blackbox (transfer) attack on it.
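To make the attack concrete, below is a minimal PyTorch sketch of the DeepFool iteration. It is an illustration, not the notebook's actual code: model is assumed to be any CIFAR-10 classifier returning logits, x a single normalized image tensor, and both the per-step overshoot and the omission of input-range clipping are simplifications of the published algorithm.

import torch

def deepfool(model, x, num_classes=10, max_iter=50, overshoot=0.02):
    """Push x just across the nearest decision boundary (simplified DeepFool)."""
    model.eval()
    x_adv = x.clone().detach()
    orig_label = model(x_adv.unsqueeze(0)).argmax().item()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv.unsqueeze(0))[0]
        if logits.argmax().item() != orig_label:
            break  # label already flipped, attack succeeded
        # Gradient of every class logit w.r.t. the input
        grads = [torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
                 for k in range(num_classes)]
        # Find the class whose (linearized) decision boundary is closest
        best_ratio, best_dir = None, None
        for k in range(num_classes):
            if k == orig_label:
                continue
            w_k = grads[k] - grads[orig_label]
            f_k = (logits[k] - logits[orig_label]).item()
            ratio = abs(f_k) / (w_k.norm().item() + 1e-8)
            if best_ratio is None or ratio < best_ratio:
                best_ratio, best_dir = ratio, w_k
        # Step just past that boundary, slightly overshooting
        r = (best_ratio + 1e-4) * best_dir / (best_dir.norm() + 1e-8)
        x_adv = (x_adv + (1 + overshoot) * r).detach()
    return x_adv.detach()

Applied image by image to the CIFAR-10 test set against the ResNet-20, a procedure like this yields the perturbed dataset that is later fed, unchanged, to the VGG16.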

Dependencies

The code runs with Python 3.10.12. To (re)run the notebook (attack.ipynb), we recommend setting up a fresh virtual environment (see below) and using it as the notebook's kernel.

# Create and activate a fresh Python 3.10 virtual environment
python3.10 -m venv env
. env/bin/activate

# Install the pinned dependencies
pip install -r requirements.txt

# Register the environment as a Jupyter kernel (--user avoids needing root)
python -m ipykernel install --user --name=adv-attack-env

Pretrained models

The pretrained CIFAR-10 models are taken from https://github.com/chenyaofo/pytorch-cifar-models and are loaded via torch.hub.
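For reference, both models can be loaded in a few lines; the entrypoint names below follow that repository's torch.hub interface, and the batch-norm VGG16 variant is an assumption rather than something this README states.

import torch

# Entrypoint names are defined by chenyaofo/pytorch-cifar-models;
# cifar10_vgg16_bn is an assumed choice of VGG16 variant.
resnet20 = torch.hub.load("chenyaofo/pytorch-cifar-models",
                          "cifar10_resnet20", pretrained=True).eval()
vgg16 = torch.hub.load("chenyaofo/pytorch-cifar-models",
                       "cifar10_vgg16_bn", pretrained=True).eval()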
