Fairness is the process of understanding and addressing bias in your data. Explainability shows how a machine learning model arrives at its predictions. Lastly, robustness measures the stability of the model's performance.
In this webinar, you will learn how to use a diabetes data set to predict whether a person is likely to develop diabetes during their lifetime. You will learn to address three of the pillars of building trustworthy AI pipelines (fairness, explainability, and robustness of the predictive models) and enhance the effectiveness of the AI predictive system.
🎓 What will you learn?
The pillars of building trustworthy AI pipelines
Check the fairness of a data set using the AI Fairness 360 Toolkit
Build the machine learning model
Explain the model using the AI Explainability 360 Toolkit
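At its core, the fairness check in the steps above compares favorable-outcome rates across groups. As a minimal plain-Python sketch of one metric AI Fairness 360 reports, disparate impact, the calculation looks like this (the toy records and the 0.8 threshold rule of thumb are illustrative, not from the webinar):

```python
# Toy records: (is_privileged_group, got_favorable_outcome); illustrative data only
records = [(1, 1), (1, 1), (1, 0), (1, 1),
           (0, 1), (0, 0), (0, 0), (0, 1)]

def selection_rate(records, group):
    """Fraction of a group's records that received the favorable outcome."""
    outcomes = [outcome for g, outcome in records if g == group]
    return sum(outcomes) / len(outcomes)

privileged_rate = selection_rate(records, 1)    # 3 of 4 -> 0.75
unprivileged_rate = selection_rate(records, 0)  # 2 of 4 -> 0.50
disparate_impact = unprivileged_rate / privileged_rate

# A common rule of thumb flags disparate impact below 0.8 as potential bias
print(f"disparate impact: {disparate_impact:.3f}")
```

Here the toy data yields a disparate impact of about 0.667, below the 0.8 rule of thumb, which is the kind of signal the toolkit surfaces before you build the model.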
👩💻 Who should attend?
Anyone who is interested in building Machine Learning models
This is a beginner to intermediate session
Log in or sign up for a free IBM Cloud Account: https://ibm.biz/BdfhDt
Register for the live stream or to watch the replay: https://www.crowdcast.io/e/adopt-responsible-ai
Read more about Trustworthy AI here: https://www.ibm.com/watson/trustworthy-ai
Anam Mahmood – Developer Advocate, IBM, https://www.linkedin.com/in/anam-mahmood-sheikh/
Hashim Noor – Client Technical Specialist, IBM, https://www.linkedin.com/in/hashim-noor/
*By registering for this event, you acknowledge this video will be recorded and consent for it to be featured on IBM media platforms and pages.