
Predict an event with fairness, explainability, and robustness


In this code pattern, learn how to use a diabetes data set to predict whether a person is likely to develop diabetes. The code pattern explores the fairness, explainability, and robustness of the predictive models and enhances the effectiveness of the AI predictive system. It demonstrates the end-to-end solution and shows how to:

  • Check the fairness of the diabetes data set using the AI Fairness 360 Toolkit
  • Develop the model
  • Explain the model using the AI Explainability 360 Toolkit
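
The fairness check in the first step boils down to comparing favorable-outcome rates across groups. The AI Fairness 360 Toolkit computes these metrics for you (for example, through its `BinaryLabelDatasetMetric` class); the following is a minimal hand-rolled sketch of two of them, using toy data invented for illustration:

```python
# Hand-rolled sketch of two group-fairness metrics that the
# AI Fairness 360 Toolkit reports. All data below is invented.

def group_rate(labels, protected, group):
    """Rate of favorable outcomes (label == 1) within one group."""
    vals = [y for y, p in zip(labels, protected) if p == group]
    return sum(vals) / len(vals)

def fairness_metrics(labels, protected):
    rate_priv = group_rate(labels, protected, 1)    # privileged group
    rate_unpriv = group_rate(labels, protected, 0)  # unprivileged group
    return {
        # 0.0 is perfectly fair; negative values favor the privileged group
        "statistical_parity_difference": rate_unpriv - rate_priv,
        # 1.0 is perfectly fair; below 0.8 is a common disparate-impact flag
        "disparate_impact": rate_unpriv / rate_priv,
    }

# Toy records: label 1 = favorable outcome, protected 1 = privileged group
labels    = [1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0]
protected = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(fairness_metrics(labels, protected))
```

In this toy data the unprivileged group receives the favorable outcome half as often as the privileged group, so the disparate impact is 0.5, well below the 0.8 threshold.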

The code pattern provides a generic code template for this entire end-to-end process, so you can plug in any data set whose fairness and explainability you want to explore.


Fairness is the process of understanding the bias introduced by your data and ensuring that your model makes equitable predictions across all demographic groups. Explainability shows how a machine learning model arrives at its predictions, giving you a clearer understanding of how the model works.
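
One simple form of explainability, in the spirit of what the AI Explainability 360 Toolkit's explainers produce, is to break a linear model's prediction into per-feature contributions and rank them. The weights and patient values below are invented purely for illustration:

```python
# Sketch of a per-prediction explanation for a linear model: each
# feature's contribution is weight * value, and ranking contributions
# by magnitude shows what drove the prediction. All numbers invented.
weights = {"glucose": 0.8, "bmi": 0.5, "age": 0.2}
bias = -1.0
patient = {"glucose": 1.4, "bmi": 0.9, "age": 0.3}  # standardized inputs

contributions = {f: weights[f] * v for f, v in patient.items()}
score = bias + sum(contributions.values())

# Features ranked by how strongly they pushed the prediction
ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
for feature, c in ranked:
    print(f"{feature}: {c:+.2f}")
```

Here the explanation is the ranked list itself: for this hypothetical patient, glucose dominates the prediction.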

In this code pattern, you use a diabetes data set to predict whether a person is likely to develop diabetes. You use IBM Watson® Studio, IBM Cloud Object Storage, the AI Explainability 360 Toolkit, and the AI Fairness 360 Toolkit to prepare the data, apply the bias mitigation algorithm, and analyze the results.

After completing this code pattern, you'll understand how to:

  • Create a project using Watson Studio
  • Use the AI Explainability 360 Toolkit
  • Use the AI Fairness 360 Toolkit


Predict an event with fairness flow

  1. Log in to IBM Watson Studio powered by Spark, initiate IBM Cloud Object Storage, and create a project.
  2. Upload the .csv data file to IBM Cloud Object Storage.
  3. Load the data file in the Watson Studio notebook.
  4. Install the AI Explainability 360 Toolkit and the AI Fairness 360 Toolkit in the Watson Studio notebook.
  5. Analyze the results after applying the bias mitigation algorithm during pre-processing, in-processing, and post-processing stages.
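
Step 5's pre-processing stage can be illustrated with reweighing, one of the bias mitigation algorithms shipped in the AI Fairness 360 Toolkit (as the `Reweighing` class). A minimal hand-rolled version, using toy data invented for illustration, assigns each (group, label) cell a training weight that makes group membership and outcome statistically independent:

```python
from collections import Counter

def reweighing_weights(labels, protected):
    """Pre-processing mitigation in the style of Reweighing: weight each
    (group, label) cell by expected / observed probability so that group
    membership and outcome become statistically independent."""
    n = len(labels)
    group_count = Counter(protected)
    label_count = Counter(labels)
    joint_count = Counter(zip(protected, labels))
    return {
        (g, y): (group_count[g] * label_count[y]) / (n * joint_count[(g, y)])
        for (g, y) in joint_count
    }

# Toy records: label 1 = favorable outcome, protected 1 = privileged group
labels    = [1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0]
protected = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
weights = reweighing_weights(labels, protected)
print(weights)
```

Favorable outcomes that are under-represented in the unprivileged group get a weight above 1, while over-represented ones in the privileged group get a weight below 1, so a classifier trained with these sample weights sees a balanced picture.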


Find the detailed steps for this pattern in the README file. The steps show you how to:

  1. Create an account with IBM Cloud.
  2. Create a new Watson Studio project.
  3. Add data.
  4. Create the notebook.
  5. Insert the data as DataFrame.
  6. Run the notebook.
  7. Analyze the results.

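Step 5 above normally relies on the pandas code that Watson Studio generates to read the uploaded .csv file from Cloud Object Storage into a DataFrame. A dependency-free sketch of the same idea, with invented column names standing in for the diabetes data set, looks like this:

```python
import csv
import io

# Stand-in for the uploaded diabetes .csv (column names invented).
raw = io.StringIO(
    "glucose,bmi,age,outcome\n"
    "148,33.6,50,1\n"
    "85,26.6,31,0\n"
)

# Parse each row into a dict of floats, one dict per record.
rows = [{k: float(v) for k, v in row.items()} for row in csv.DictReader(raw)]
print(rows[0])
```

In the actual notebook, the generated snippet produces a pandas DataFrame instead of a list of dicts, but the shape of the data is the same: one record per row, one named value per column.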
This code pattern is part of The AI 360 Toolkit: AI models explained use case series, which helps stakeholders and developers understand the complete AI model lifecycle and make informed decisions.