Foolbox is an adversarial attacks library that works natively with PyTorch, TensorFlow and JAX


Keywords
adversarial-attacks, adversarial-examples, jax, keras, machine-learning, python, pytorch, tensorflow
License
MIT
Install
pip install foolbox==1.6.0


Foolbox: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX

Foolbox is a Python library that lets you easily run adversarial attacks against machine learning models like deep neural networks. It is built on top of EagerPy and works natively with models in PyTorch, TensorFlow, and JAX.

🔥 Design

Foolbox 3 has been rewritten from scratch using EagerPy instead of NumPy to achieve native performance on models developed in PyTorch, TensorFlow, and JAX, all from a single code base without code duplication.

  • Native Performance: Foolbox 3 is built on top of EagerPy, runs natively on PyTorch, TensorFlow, and JAX models, and comes with real batch support.
  • State-of-the-art attacks: Foolbox provides a large collection of state-of-the-art gradient-based and decision-based adversarial attacks.
  • Type Checking: Catch bugs before running your code thanks to extensive type annotations in Foolbox.

📖 Documentation

  • Guide: The best place to get started with Foolbox is the official guide.
  • Tutorial: If you are looking for a tutorial, check out this Jupyter notebook on Colab.
  • Documentation: The API documentation can be found on ReadTheDocs.

🚀 Quickstart

pip install foolbox

Foolbox is tested with Python 3.8 and newer; however, it will most likely also work with Python 3.6 and 3.7. To use it with PyTorch, TensorFlow, or JAX, the respective framework needs to be installed separately. These frameworks are not declared as dependencies because not everyone wants to use (and thus install) all of them, and because some of these packages have different builds for different architectures and CUDA versions. Besides that, all essential dependencies are installed automatically.
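A typical setup with the PyTorch backend might therefore look like the following (the exact framework install command depends on your platform and CUDA version, so treat this as a sketch rather than the one true incantation):

```shell
# Install Foolbox itself; this pulls in EagerPy and the other
# essential dependencies, but no deep learning framework.
pip install foolbox

# Separately install the framework whose models you want to attack,
# e.g. PyTorch; see pytorch.org for the right build for your system.
pip install torch torchvision
```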

You can see the versions we currently use for testing in the Compatibility section below, but newer versions are in general expected to work.

🎉 Example

import foolbox as fb

# Wrap your pretrained PyTorch model (in eval mode); bounds is the
# valid input range of the images, here [0, 1].
model = ...
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

# images and labels must be PyTorch tensors on the same device as the
# model. Running the attack for several epsilons returns, per epsilon,
# the raw adversarials, the clipped adversarials, and a success mask.
attack = fb.attacks.LinfPGD()
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
_, advs, success = attack(fmodel, images, labels, epsilons=epsilons)
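The boolean `success` tensor has one row per epsilon and one column per image; averaging each row gives the attack success rate, and one minus that is the model's robust accuracy at that epsilon. A minimal NumPy sketch of this bookkeeping (the array below is made-up stand-in data, not real attack output):

```python
import numpy as np

# Stand-in for the boolean `success` tensor returned by attack(...):
# shape (len(epsilons), batch_size), True where the attack succeeded.
success = np.array([
    [False, False, False, False],  # eps = 0.0
    [True,  False, False, False],  # eps = 0.01
    [True,  True,  True,  False],  # eps = 0.1
])

# Robust accuracy per epsilon: fraction of samples the attack failed on.
robust_accuracy = 1 - success.mean(axis=-1)
print(robust_accuracy)  # [1.   0.75 0.25]
```

As expected, robust accuracy decreases monotonically as the perturbation budget epsilon grows.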

More examples can be found in the examples folder, e.g. a full ResNet-18 example.

📄 Citation

If you use Foolbox for your work, please cite our JOSS paper on Foolbox Native (i.e., Foolbox 3.0) and our ICML workshop paper on Foolbox using the following BibTeX entries:

@article{rauber2017foolboxnative,
  doi = {10.21105/joss.02607},
  url = {https://doi.org/10.21105/joss.02607},
  year = {2020},
  publisher = {The Open Journal},
  volume = {5},
  number = {53},
  pages = {2607},
  author = {Jonas Rauber and Roland Zimmermann and Matthias Bethge and Wieland Brendel},
  title = {Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX},
  journal = {Journal of Open Source Software}
}
@inproceedings{rauber2017foolbox,
  title={Foolbox: A Python toolbox to benchmark the robustness of machine learning models},
  author={Rauber, Jonas and Brendel, Wieland and Bethge, Matthias},
  booktitle={Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning},
  year={2017},
  url={http://arxiv.org/abs/1707.04131},
}

πŸ‘ Contributions

We welcome contributions of all kinds; please have a look at our development guidelines. In particular, you are invited to contribute new adversarial attacks. If you would like to help, you can also have a look at the issues that are marked contributions welcome.

💡 Questions?

If you have a question or need help, feel free to open an issue on GitHub. Once GitHub Discussions becomes publicly available, we will switch to that.

💨 Performance

Foolbox 3.0 is much faster than Foolbox 1 and 2. A basic performance comparison can be found in the performance folder.

🐍 Compatibility

We currently test with the following versions:

  • PyTorch 1.10.1
  • TensorFlow 2.6.3
  • JAX 0.2.17
  • NumPy 1.18.1