Extensive evidence has shown that AI can embed human and societal biases and deploy them at scale, and many algorithms are now being reexamined for illegal bias. So how do you remove bias & discrimination from the machine learning pipeline?
In this webinar you’ll learn about debiasing techniques that can be implemented using the open source toolkit AI Fairness 360.
🌟 AI Fairness 360 (AIF360, https://aif360.mybluemix.net/) is an extensible, open source toolkit for measuring, understanding, and removing AI bias. AIF360 is the first solution that brings together the most widely used bias metrics, bias mitigation algorithms, and metric explainers from the top AI fairness researchers across industry & academia.
In this meetup you’ll learn:
✔️How to measure bias in your data sets & models
✔️How to apply the fairness algorithms to reduce bias
✔️How to work through a practical use case of bias measurement & mitigation
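To give a flavor of the first two points, here is a minimal concept sketch in plain Python (it deliberately does not use the AIF360 API itself): it measures disparate impact on a toy dataset, then computes instance weights in the style of a reweighing preprocessing algorithm. All names and the toy data are illustrative assumptions, not taken from the webinar materials.

```python
# Concept sketch only — AIF360 provides production implementations of these ideas.
# Toy data: (protected_attribute, label) pairs; 1 = privileged group / favorable label.
data = [(1, 1), (1, 1), (1, 0), (1, 1),
        (0, 1), (0, 0), (0, 0), (0, 0)]

def favorable_rate(rows, group):
    """Fraction of favorable outcomes (label == 1) within one group."""
    labels = [label for attr, label in rows if attr == group]
    return sum(labels) / len(labels)

# Disparate impact: ratio of favorable-outcome rates, unprivileged / privileged.
# A value of 1.0 means parity; values far below 1.0 indicate bias.
di = favorable_rate(data, 0) / favorable_rate(data, 1)
print(f"disparate impact: {di:.2f}")  # 0.25 / 0.75 ≈ 0.33

# Reweighing-style mitigation: weight each (group, label) cell so that group
# and label become statistically independent in the weighted data:
#   w(g, y) = P(group=g) * P(label=y) / P(group=g, label=y)
n = len(data)
def prob(pred):
    return sum(1 for row in data if pred(row)) / n

weights = {}
for g in (0, 1):
    for y in (0, 1):
        p_joint = prob(lambda r: r == (g, y))
        weights[(g, y)] = (prob(lambda r: r[0] == g) *
                           prob(lambda r: r[1] == y) / p_joint)

print(weights)
# Under-represented cells (e.g. unprivileged + favorable) get weights > 1,
# over-represented cells get weights < 1.
```

Training a classifier on the weighted samples is one way to reduce the measured disparity; AIF360 bundles this alongside many other metrics and mitigation algorithms.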
Trisha Mahoney, Sr. AI Tech Evangelist, IBM
Trisha Mahoney is an AI Tech Evangelist for IBM with a focus on fairness & bias. Trisha has spent the last 10 years working on Artificial Intelligence and Cloud solutions at several Bay Area tech firms, including Salesforce, IBM, and Cisco. Prior to that, she spent 8 years working as a data scientist in the chemical detection space. She holds an Electrical Engineering degree and an MBA in Technology Management.