
IBM Code Bristol meetup @ EngineShed – Evaluating performance, fairness and robustness of models in production

December 4, 2019 6:30 pm GMT

Hands-on workshop for developers by DEG UKI developer advocates

To trust a decision made by an algorithm, we need to know that it is reliable and fair, that it can be accounted for, and that it will cause no harm. We need assurance that it cannot be tampered with and that the system itself is secure. We need to understand the rationale behind the algorithmic assessment, recommendation or outcome, and be able to interact with it, probe it – even ask questions. And we need assurance that the values and norms of our societies are also reflected in those outcomes.

In this workshop we will use Watson OpenScale, which is built with trusted AI open-source projects. Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure they remain fair, explainable and compliant, wherever your models were built or are running.

We will walk through the process of deploying a credit risk model and then monitoring the model to explore the different aspects of trusted AI.

By the end of the lab, you will have:
– Deployed a model from development to a runtime environment.
– Monitored the operational performance of the model over time.
– Tracked the model quality (accuracy metrics) over time.
– Identified and explored the fairness of the model as it receives new data.
– Understood how the model arrived at its predictions.
– Tracked the robustness of the model.
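To give a flavour of the fairness monitoring above: one common metric is disparate impact, the ratio of favourable-outcome rates between a monitored group and a reference group. Watson OpenScale computes metrics like this automatically; the sketch below is a hypothetical, standalone illustration of the idea (the function name, group labels and toy credit decisions are all assumptions, not part of the product API).

```python
def disparate_impact(outcomes, groups, favorable="approved",
                     monitored="female", reference="male"):
    """Ratio of favourable-outcome rates: monitored group / reference group.

    A value near 1.0 suggests similar treatment; values well below 1.0
    suggest the monitored group receives favourable outcomes less often.
    """
    def rate(group):
        # Outcomes belonging to this group
        members = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(o == favorable for o in members) / len(members)
    return rate(monitored) / rate(reference)

# Toy credit-risk decisions (hypothetical data, for illustration only)
outcomes = ["approved", "denied", "denied", "approved", "approved", "denied"]
groups   = ["female",   "female", "female", "male",     "male",     "male"]

# Female approval rate 1/3, male approval rate 2/3 -> ratio 0.5
print(disparate_impact(outcomes, groups))
```

A ratio of 0.5 like this would trip a typical fairness alert; in the workshop, OpenScale surfaces such drops as the model receives new scoring data.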

This is a hands-on session so please bring a laptop along!

Margriet – Developer Advocate (@margrietGr)

18.00 – 18.30: Registration, Food, Drinks and Networking
18.30 – 19.00: Introduction to robust, unbiased and reproducible AI
19.00 – 20.30: Hands on workshop

EngineShed, Bristol, United Kingdom