Few people care whether the AI behind a program that draws cats is ethical. But when AI is used in regulated, high-stakes domains such as medicine, law enforcement, recruiting, data privacy, military defense, or self-driving vehicles, it must produce transparent, understandable results that reflect the ethical standards and norms of our society.
To address these concerns, we evaluate machine learning models against three metrics:
- Fairness: to understand the bias in your data or model
- Explainability: to show how a machine learning model makes its predictions
- Robustness: to measure the stability of the model's performance
In this webinar, you will use a diabetes dataset to build a model that predicts whether a person is likely to develop diabetes, and then evaluate that model for trustworthiness. You will learn the three pillars of building trustworthy AI pipelines: fairness, explainability, and robustness of the predictive models, and how these pillars enhance the effectiveness of an ethical AI predictive system.
🎓 What will you learn?
- The pillars of building trustworthy AI pipelines
- How to check the fairness of a dataset using the AI Fairness 360 Toolkit
- How to develop a machine learning model
- How to explain the model using the AI Explainability 360 Toolkit
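To give a flavor of the fairness checks covered in the session, here is a minimal sketch of two group-fairness metrics that AI Fairness 360 reports (disparate impact and statistical parity difference), computed by hand on a hypothetical toy dataset. The records, group labels, and thresholds below are illustrative assumptions, not values from the webinar; on real data you would use the toolkit's metric classes instead.

```python
# Toy records: each tuple is (group, outcome), where group 1 is the
# privileged group and outcome 1 is the favorable outcome.
records = [
    (1, 1), (1, 1), (1, 1), (1, 0),   # privileged group: 3 of 4 favorable
    (0, 1), (0, 0), (0, 0), (0, 0),   # unprivileged group: 1 of 4 favorable
]

def favorable_rate(records, group):
    """Fraction of a group's records that received the favorable outcome."""
    outcomes = [y for g, y in records if g == group]
    return sum(outcomes) / len(outcomes)

priv_rate = favorable_rate(records, 1)    # 0.75
unpriv_rate = favorable_rate(records, 0)  # 0.25

# Disparate impact: ratio of favorable rates. 1.0 means parity; a common
# rule of thumb flags values below 0.8 as potentially biased.
disparate_impact = unpriv_rate / priv_rate

# Statistical parity difference: gap in favorable rates. 0.0 means parity.
parity_difference = unpriv_rate - priv_rate

print(disparate_impact)   # ≈ 0.333 — well below the 0.8 rule of thumb
print(parity_difference)  # -0.5
```

These are the same quantities AI Fairness 360 exposes on a full dataset, where the toolkit also handles dataset wrapping and bias-mitigation algorithms.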
👩💻 Who should attend?
Anyone who is interested in building Machine Learning models
- Sign up for your IBM Cloud account: https://ibm.biz/Bdfs2Z
- Register for the live stream and access the replay: https://www.crowdcast.io/e/ai-fairness-360
- Anam Mahmood – Cloud Developer Advocate, IBM, https://www.linkedin.com/in/anam-mahmood-sheikh/
- Hashim Noor – Client Technical Specialist, IBM, https://www.linkedin.com/in/hashim-noor/
By registering for this event, you acknowledge that the session will be recorded and consent to the recording being featured on IBM media platforms and pages.