A conference devoted to making the most of a world that’s DATA DRIVEN


Adversarial Robustness Toolbox – Defending AI against adversarial threats

In this workshop, we will take a deep dive into an emerging topic in the AI community: the robustness of AI models against so-called adversarial samples, i.e. inputs crafted by an adversary in order to corrupt the model’s outputs. We will look at:

– Adversarial samples on image classifiers (deep neural networks), as well as on conventional machine learning models such as Support Vector Machines or Decision Trees.
– Metrics for quantifying the robustness, or conversely the vulnerability, of AI models against adversarial samples.
– Strategies for defending against adversarial samples, such as input preprocessing, adversarial training, or runtime detection of adversarial samples.

The workshop will comprise interactive exercises, mostly in Jupyter notebooks. We will make use of the Adversarial Robustness Toolbox (https://github.com/IBM/adversarial-robustness-toolbox), an open-source project designed to enable quick prototyping of experiments with adversarial samples.
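To give a flavour of what an adversarial sample is before the workshop, here is a minimal, hand-rolled sketch in plain NumPy: a Fast-Gradient-Sign-style perturbation flips the decision of a toy logistic-regression classifier. The weights, inputs, and epsilon are made-up illustration values, and this deliberately does not use the Adversarial Robustness Toolbox API; the workshop exercises themselves rely on ART.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" logistic-regression model (assumed weights for illustration)
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    # Probability of the positive class
    return sigmoid(w @ x + b)

# A benign input that the model classifies as positive (p > 0.5)
x = np.array([1.0, 0.2])
p = predict(x)

# FGSM idea: step the input in the direction that increases the loss for the
# true label y = 1. For binary cross-entropy, the gradient wrt x is (p - y) * w.
y = 1.0
grad_x = (p - y) * w
eps = 0.4  # perturbation budget (assumed value)
x_adv = x + eps * np.sign(grad_x)

p_adv = predict(x_adv)
print(p, p_adv)  # the tiny perturbation pushes the prediction across 0.5
```

The key point, which carries over to deep neural networks, is that the gradient of the loss with respect to the *input* (not the weights) tells the adversary exactly which direction to nudge each input feature.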

Required skills: Python (intermediate), AI/Machine Learning (beginner)

Software to prepare:
- Python 3.5 or later
- Jupyter Notebooks
- Installation of the v1.0.0 release of the Adversarial Robustness Toolbox (https://github.com/IBM/adversarial-robustness-toolbox)
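A possible way to set this up, assuming a standard pip-based environment (adjust to your own setup, e.g. conda or virtualenv):

```shell
# Check the Python version (should report 3.5 or later)
python3 --version

# Jupyter Notebooks
pip install jupyter

# The v1.0.0 release of the Adversarial Robustness Toolbox from PyPI
pip install adversarial-robustness-toolbox==1.0.0
```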
coffee break included
Edition 2019