by Animesh Singh, Anupama Murthy, Christian Kadner | Published June 25, 2018
Adversarial attacks pose a real threat to the deployment of AI systems in security-critical applications, and they present an asymmetrical challenge between attackers and defenders. An attacker's reward is a single successful attack that doesn't raise suspicion, while a defender must develop strategies that guard against all known attacks, ideally for all possible inputs. This code pattern explains how to use a Jupyter Notebook to integrate the Fast Gradient Method (FGM) from the Adversarial Robustness Toolbox (ART) into a model training pipeline built on Fabric for Deep Learning (FfDL). The generated adversarial samples are then used to evaluate the robustness of the trained model.
Evaluating the robustness of machine learning models against adversarial attacks is becoming an integral step in machine learning pipelines. The Adversarial Robustness Toolbox (ART) is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attack and defense methods, with implementations of many state-of-the-art techniques for attacking and defending classifiers.
Fabric for Deep Learning (FfDL, pronounced "fiddle") provides a consistent way to run deep learning frameworks such as TensorFlow, PyTorch, Caffe, and Keras as a service on Kubernetes, hiding much of the complexity of setting up distributed deep learning training environments. Training machine learning models is also a very iterative process, especially when hardening models against attacks by folding adversarial samples back into the training data set. Jupyter Notebooks are a popular tool among data scientists because they allow for interactive programming in a web application.
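An FfDL training job is described by a manifest file that names the framework, the resources to allocate, and the object storage buckets for training data and results. The sketch below is illustrative only, loosely modeled on the examples published in the FfDL repository; the bucket names, credentials, and endpoint are placeholders that depend on your own deployment.

```yaml
# Illustrative FfDL training-job manifest (values are placeholders)
name: keras-mnist-adversarial
description: Train a Keras/TensorFlow model, later evaluated with ART
version: "1.0"
gpus: 0
cpus: 1
memory: 1Gb
learners: 1

data_stores:
  - id: sl-internal-os
    type: mount_cos
    training_data:
      container: training-data        # S3 bucket with the training set
    training_results:
      container: trained-models       # S3 bucket for the trained model
    connection:
      auth_url: <your-s3-endpoint>
      user_name: <access-key-id>
      password: <secret-access-key>

framework:
  name: tensorflow
  version: "1.5.0-py3"
  command: python3 train_model.py
```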
In this code pattern, we use a Jupyter Notebook with Python and Bash shell magics to launch training jobs on FfDL, and we use the Adversarial Robustness Toolbox to detect model vulnerabilities. We explain how training jobs can be configured and started, and how to monitor running training jobs. We use the Keras and TensorFlow deep learning frameworks, and the Boto3 Python SDK to interact with an S3 cloud object storage instance, which is required to store the training data and the trained model. From the Adversarial Robustness Toolbox, we run the Fast Gradient Method (FGM) to craft adversarial samples and generate metrics about the robustness of the trained model.
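The core idea behind FGM is simple: perturb each input in the direction of the sign of the loss gradient with respect to that input, which maximally increases the loss for a given perturbation budget. The toy sketch below illustrates this with only NumPy on a hypothetical logistic-regression classifier; the code pattern itself uses ART's `FastGradientMethod` against a Keras/TensorFlow model, and the weights, sample, and epsilon here are made up for illustration.

```python
# Minimal sketch of the Fast Gradient Method (FGM) idea, using only NumPy.
# Toy binary classifier: sigmoid(x . w + b), with hand-picked weights.
import numpy as np

w = np.array([1.5, -2.0])
b = 0.1

def predict(x):
    """Probability of class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgm_perturb(x, y, eps):
    """Craft an adversarial sample x' = x + eps * sign(grad_x loss)."""
    p = predict(x)
    # For cross-entropy loss, the gradient w.r.t. the input is (p - y) * w
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# A clean sample that the classifier assigns to class 1 with high confidence
x = np.array([2.0, -1.0])
y = 1.0
x_adv = fgm_perturb(x, y, eps=2.0)

# The perturbation pushes the confidence for class 1 below 0.5,
# flipping the prediction even though x_adv stays close to x.
print(predict(x), predict(x_adv))
```

Measuring accuracy on a batch of such adversarial samples, and comparing it with clean-test accuracy, is exactly the kind of robustness metric the code pattern computes with ART.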
Ready to put this code pattern to use? Complete details on how to get started running and using this application are in the README.