Virtual: How to Trust an ML Model with OpenScale – Crowdcast


October 27, 2020 11:00 am EET

🌟 Overview

In this session, we will cover Watson OpenScale, a tool on IBM Cloud that helps us gain trust in our machine learning models.
Watson OpenScale tracks and measures outcomes from your AI models and helps ensure they remain fair, explainable, and compliant, wherever your models were built or are running. Watson OpenScale also detects, and helps correct, drift in accuracy when an AI model is in production.

More information about Watson OpenScale can be found here:

πŸŽ“ What will you learn?

πŸ”₯ In this tutorial, you’ll see how IBM Watson OpenScale can be used to monitor your artificial intelligence (AI) models for fairness and accuracy. You’ll get a hands-on look at how Watson OpenScale automatically generates a debiased model endpoint to mitigate fairness issues and provides an explainability view to help you understand how your model makes its predictions. In addition, you’ll see how Watson OpenScale uses drift detection, which tells you when runtime data is inconsistent with your training data, or when there is an increase in data that is likely to lead to lower accuracy.
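To build intuition before the session, here is a minimal sketch of the idea behind data drift detection: comparing the distribution of a feature at runtime against its distribution in the training data. This is a conceptual illustration only, not the Watson OpenScale API; the function name, binning, and the 0.2 alert threshold are illustrative assumptions.

```python
# Conceptual drift-detection sketch: Population Stability Index (PSI)
# between a feature's training-time and runtime binned distributions.
# Not the Watson OpenScale API; names and threshold are illustrative.
import math

def psi(train_counts, runtime_counts):
    """Population Stability Index between two binned distributions."""
    total_t = sum(train_counts)
    total_r = sum(runtime_counts)
    score = 0.0
    for t, r in zip(train_counts, runtime_counts):
        # Smooth empty bins to avoid log(0) / division by zero.
        p = max(t / total_t, 1e-6)
        q = max(r / total_r, 1e-6)
        score += (q - p) * math.log(q / p)
    return score

# Same bins, but the runtime distribution has shifted noticeably.
train = [50, 30, 15, 5]
runtime = [20, 25, 30, 25]
drift = psi(train, runtime)
print(f"PSI = {drift:.3f}, drift detected: {drift > 0.2}")
```

A common rule of thumb is that a PSI above roughly 0.2 signals a meaningful shift worth investigating; tools like Watson OpenScale automate this kind of monitoring continuously against the deployed model's scoring traffic.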

πŸ‘©‍πŸ’» Who should attend?

Everyone in tech (developers, data scientists, …) is welcome to attend the webinar!

πŸ‘©‍🏫 Prerequisites

☁ Register for a free IBM Cloud account prior to the event to get the most out of our workshop.

πŸŽ™οΈ Speaker

Tal Neeman, IBM Developer Advocate