Security and privacy are always top priorities for developers.
In this Think 2021 Master Class, learn about the Adversarial Robustness Toolbox, an open source software library that helps developers defend deep neural networks against adversarial attacks, making AI systems more secure. Join IBM researcher Beat Buesser as he examines this library and related tools for applying the robustness, privacy, and security principles of the (R)REPEATS Trusted AI Principles.
Introduction to (R)REPEATS and the Adversarial Robustness Toolbox
Get an introduction to the Adversarial Robustness Toolbox and the developers behind it, and learn about the (R)REPEATS Principles from the LF AI & Data Foundation, which cover reproducibility, robustness, equitability, privacy, explainability, accountability, transparency, and security.
Example use of Adversarial Robustness Toolbox components
Using the case of an evasion attack, walk through the code the Adversarial Robustness Toolbox uses to determine whether a machine learning model is vulnerable to attack.
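To make the idea of an evasion attack concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the evasion attacks the toolbox implements. This is not the toolbox's API: the hand-rolled logistic-regression model, its weights, and the sample input below are all illustrative assumptions, kept dependency-free so the mechanics are visible.

```python
import math

# Illustrative sketch of an FGSM-style evasion attack (not the ART API).
# The weights, bias, input, and epsilon below are made-up toy values.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1 under a logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Perturb x by eps in the sign of the loss gradient.
    For logistic loss, d(loss)/d(x_i) = (p - y) * w_i."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = [1.0 if g > 0 else -1.0 if g < 0 else 0.0 for g in grad]
    return [xi + eps * s for xi, s in zip(x, sign)]

w, b = [2.0, -1.5], 0.1   # toy "trained" weights
x, y = [0.6, -0.4], 1     # a sample the model classifies correctly

p_clean = predict(w, b, x)              # > 0.5: correct on the clean input
x_adv = fgsm(w, b, x, y, eps=0.8)
p_adv = predict(w, b, x_adv)            # < 0.5: the small perturbation flips it
print(f"clean p = {p_clean:.3f}, adversarial p = {p_adv:.3f}")
```

The same pattern, a trained estimator, an attack object, and a `generate` step that produces perturbed inputs, is what the session's walkthrough shows using the toolbox itself.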
Complete example walkthrough
Using the case of an attack on a classification estimator, walk through the code the Adversarial Robustness Toolbox uses to determine whether a classification fails because an image has been rotated.
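The rotation check can be sketched in miniature: classify an input, apply the transformation, and classify again to see whether the prediction survives. This is not the toolbox's code; the 3x3 "image" and the rule-based classifier below are illustrative assumptions standing in for a real model and dataset.

```python
# Illustrative robustness check under rotation (not the ART API).
# A toy 3x3 image and rule-based classifier stand in for a real model.

def rotate90(img):
    """Rotate a square image 90 degrees clockwise."""
    n = len(img)
    return [[img[n - 1 - c][r] for c in range(n)] for r in range(n)]

def classify(img):
    """Toy rule: 'vertical' if the middle column outweighs the middle row."""
    col_sum = sum(row[1] for row in img)
    row_sum = sum(img[1])
    return "vertical" if col_sum > row_sum else "horizontal"

vertical_bar = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
]

print(classify(vertical_bar))            # -> "vertical" on the clean image
print(classify(rotate90(vertical_bar)))  # -> "horizontal" after rotation
```

The prediction changes under rotation, which is exactly the kind of failure the session's complete walkthrough diagnoses with the toolbox on a real classifier.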
Q&A and wrap-up
Questions and answers from the Think Master Class session on the Adversarial Robustness Toolbox, including an overview of the toolbox's files in its open source repository.
Summary and next steps
This Think 2021 Master Class session provided an overview of the Adversarial Robustness Toolbox, which developers can use to defend and evaluate machine learning models against the threats of evasion, poisoning, extraction, and inference. Additionally, it covered the robustness, privacy, and security principles of the (R)REPEATS Trusted AI Principles.
Get more information about the Adversarial Robustness Toolbox on IBM Developer.