AI Fairness 360: Attacking bias from all angles
Take a look at AI Fairness 360, an open source library to help detect and remove bias in machine learning models and data sets.
With the power of machine learning, artificial intelligence (AI) has been making decisions for us – from recommendation systems that personalize what we see to credit scores that rank us by our behavior. As AI becomes more common and is entrusted with critical decisions such as criminal justice and hiring, there is a growing demand that AI be fair, transparent, and accountable to everyone.
Underrepresentation in data sets and misinterpretation of data can introduce serious flaws and bias into decisions that matter across many industries, and these problems can be hard to detect without the right tools. At IBM, we are deeply committed to delivering services that are unbiased, explainable, value aligned, and transparent. And to back up that commitment, we are pleased to announce the launch of AI Fairness 360, an open source library to help detect and remove bias in machine learning models and data sets.
The AI Fairness 360 Python package includes a comprehensive set of metrics for data sets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in data sets and models. Containing over 30 fairness metrics and 9 state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education.
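To make the metrics concrete, here is a minimal pure-Python sketch of two of the best-known group fairness measures of the kind the toolkit computes: statistical parity difference and disparate impact. The group outcome lists are hypothetical toy data, not drawn from the toolkit's data sets.

```python
# Two common group fairness metrics, illustrated on toy data.
# Statistical parity difference of 0 and disparate impact of 1 mean parity.

def favorable_rate(labels):
    """Fraction of favorable (label == 1) outcomes in a group."""
    return sum(labels) / len(labels)

def statistical_parity_difference(unpriv, priv):
    # P(favorable | unprivileged) - P(favorable | privileged)
    return favorable_rate(unpriv) - favorable_rate(priv)

def disparate_impact(unpriv, priv):
    # Ratio of favorable rates; values below ~0.8 are often flagged as biased.
    return favorable_rate(unpriv) / favorable_rate(priv)

# Hypothetical outcomes: 1 = favorable decision (for example, loan approved).
unprivileged = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # favorable rate 0.2
privileged   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # favorable rate 0.7

print(statistical_parity_difference(unprivileged, privileged))  # -0.5
print(disparate_impact(unprivileged, privileged))               # ~0.286
```

The toolkit computes these and many more metrics directly on its data set objects, together with plain-language explanations of what each value means.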
To understand the motivation and the research efforts behind this launch, please refer to this blog.
In this post, we walk through different ways of using this capability.
1. Using the AIF360 open source toolkit
The easiest way to get started with the AIF360 library itself is to use the pip install command:
pip install aif360
Or clone the repository and run pip install from within the folder:
git clone https://github.com/IBM/AIF360
cd AIF360
pip install -r requirements.txt
Then, you can get started with the open source tutorials. The AIF360 open source directory contains a diverse collection of Jupyter Notebooks that can be used in various ways.
2. Using the hosted AIF360 web application
By using our hosted web application, you can choose a sample data set and associated demos. Bias occurs in the data used to train a model. We have provided three sample data sets that you can use to explore bias checking and mitigation. Each data set contains attributes that should be protected to avoid bias. For example, when you run the toolkit on the ‘Adult census income’ data set with default thresholds, it detects bias against unprivileged groups (non-white or female) on some metrics.
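Mitigation can be sketched in the same spirit. One of the pre-processing algorithms the toolkit ships is Reweighing (Kamiran and Calders), which assigns each training example a weight so that, after weighting, the protected attribute and the outcome become statistically independent. Below is a minimal pure-Python sketch of that weighting rule on hypothetical toy data; the toolkit's own implementation operates on its data set objects rather than bare lists.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Weight each example by w(g, y) = P(g) * P(y) / P(g, y),
    the Reweighing rule: after weighting, group membership and
    outcome are independent, removing statistical parity bias."""
    n = len(labels)
    count_group = Counter(groups)
    count_label = Counter(labels)
    count_joint = Counter(zip(groups, labels))
    return [
        (count_group[g] / n) * (count_label[y] / n) / (count_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical data: group 0 = unprivileged, label 1 = favorable outcome.
# The unprivileged group gets the favorable outcome far less often.
groups = [0, 0, 0, 0, 1, 1, 1, 1]
labels = [1, 0, 0, 0, 1, 1, 1, 0]

weights = reweighing_weights(groups, labels)
print(weights)  # rare (group, label) pairs get weights above 1
```

With these weights applied, the weighted favorable rate is the same in both groups, which is exactly the parity condition the statistical parity metric checks.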
3. Using AIF360 IBM code pattern
To simplify the development process and streamline the search for free, open source code, IBM has created code patterns. These code patterns do the dirty work for the developer. They are curated packages of code, one-click GitHub repos, documentation, and resources that address some of the most popular areas of development, including AI, blockchain, containers, and IoT. For example, let’s say that you want to create a chatbot for any industry that has a Slack front end and a transactional back end, which is a common design pattern today. By using an IBM code pattern, you can start at the point of a Slack front end and a transactional back end and focus on your application – and not what it takes to stand it up and make it work.
As part of our many Artificial Intelligence and Data Analytics code patterns, we have created a code pattern to get started with AIF360. This pattern shows you how to launch a Jupyter Notebook locally or in IBM Cloud and use it to run AIF360. In short:
- You start a Jupyter Notebook (either locally or on Watson Studio).
- The Notebook imports the AIF360 toolkit.
- Data is loaded into the Notebook.
- You run the Notebook, which uses the AIF360 toolkit to assess the fairness of a machine learning model.
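The assessment step in that workflow can be sketched without the toolkit: given a trained model's predictions, the ground-truth labels, and a protected attribute, compare true positive rates across groups (the gap is often called the equal opportunity difference). Everything below is hypothetical toy data standing in for a real model's output.

```python
def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN), computed over one group's examples."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives)

def equal_opportunity_difference(y_true, y_pred, protected):
    """TPR(unprivileged) - TPR(privileged); 0 means equal opportunity."""
    def group(flag):
        idx = [i for i, g in enumerate(protected) if g == flag]
        return [y_true[i] for i in idx], [y_pred[i] for i in idx]
    return true_positive_rate(*group(0)) - true_positive_rate(*group(1))

# Hypothetical model output; protected flag 0 = unprivileged group.
y_true    = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred    = [1, 0, 0, 0, 1, 1, 1, 0]
protected = [0, 0, 0, 0, 1, 1, 1, 1]

print(equal_opportunity_difference(y_true, y_pred, protected))  # -0.5
```

A value this far from zero signals that the model misses qualified members of the unprivileged group far more often, which is the kind of finding the Notebook surfaces and the mitigation algorithms then address.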
In addition, many more code patterns on AI Fairness 360 are coming soon!
Get started! Free your AI systems from all biases!
The AI Fairness 360 toolkit, along with the Adversarial Robustness Toolbox (ART), Fabric for Deep Learning (FfDL), and the Model Asset Exchange (MAX), is available on GitHub to deploy, use, and extend. There are additional code patterns around all of these open source projects, so get started today!
- Deploy and use a multi-framework deep learning platform on Kubernetes
- Integrate adversarial attacks into a model training pipeline
- Leverage TensorFlow and Fabric for Deep Learning to train and deploy a Fashion MNIST model
- Create a web app to visually interact with objects detected using machine learning
We are looking forward to your feedback! Join us to free our next generation AI systems of any inherent biases, and create trusted and transparent AI pipelines!