Adversarial Robustness Toolbox (ART) – Evasion, Poisoning, Extraction and Inference

Adversarial Robustness Toolbox (ART) is a Python library for machine learning (ML) security, providing tools for defending, certifying and verifying ML models against the adversarial threats of evasion, poisoning, extraction and inference. Beat will discuss existing real-world threat scenarios against ML applications, followed by an overview of ART and the recent release of ART 1.3. He will present ART's tools for evaluating robustness against evasion attacks that change a model's decisions, poisoning of data and models that introduces backdoors to control models, model stealing through extraction queries, and privacy violations that infer private data from models. He will then walk step by step through an adaptive white-box evasion attack against an ML model with ART code examples, explain attempts to defend the ML model against the attack, and demonstrate how aware attackers adapt to defences to reach their goals.
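To illustrate the kind of evasion attack the walkthrough covers, here is a minimal, self-contained sketch of a one-step fast gradient sign method (FGSM) attack against a toy logistic-regression model in plain NumPy. This is not ART's API; all function names and parameters below are illustrative assumptions chosen to show the core idea: perturb the input in the direction that increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM (illustrative sketch, not ART's API).

    Moves each input a distance eps (per feature) in the sign of the
    loss gradient with respect to the input.
    """
    p = sigmoid(x @ w + b)
    # d(cross-entropy)/dx for logistic regression is (p - y) * w
    grad_x = (p - y)[:, None] * w
    return x + eps * np.sign(grad_x)

# Toy model: the decision depends only on the first feature.
w = np.array([4.0, 0.0])
b = 0.0
x = np.array([[0.3, 1.0]])   # clean input, confidently class 1
y = np.array([1.0])

x_adv = fgsm_attack(x, y, w, b, eps=0.5)
p_clean = sigmoid(x @ w + b)[0]
p_adv = sigmoid(x_adv @ w + b)[0]
print(p_clean > 0.5, p_adv < 0.5)  # → True True: the small perturbation flips the prediction
```

A white-box attacker with access to the model's gradients can craft such perturbations directly; ART packages this and many stronger attacks behind a common interface, which is what the live walkthrough demonstrates.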