
The AI 360 Toolkit: AI models explained

The capabilities of artificial intelligence (AI) are no secret, and machine intelligence keeps improving through continuous self-learning. However, these AI models are still largely black boxes, and clients often question their decisions. Research on improving and optimizing the algorithms is moving faster than ever, but that alone won’t suffice. Building trust in AI is a recurring concern for the developers, sales, and marketing teams who work directly with clients, so it deserves a closer look. Imagine owning a computer vision company that builds AI classification models that help healthcare professionals diagnose cancer from MRIs, CT scans, and X-rays. It can be difficult for a doctor to rely on a diagnosis suggested by an AI model when a person’s life is at stake. That’s why building trusted AI pipelines has become increasingly important in AI applications.

Three pillars for building trustworthy AI pipelines

  • Fairness: Unfair biases can exist in the data used to train a model as well as in the model’s own decision-making algorithm. Fairness is about identifying and tackling those biases so that a model’s predictions are fair and do not unethically discriminate. (A hand-rolled metric sketch follows this list.)

  • Explainability: Explainability shows how a machine learning model arrives at its predictions, giving a clearer understanding of how the model works. It is essential for data scientists to detect, avoid, and remove failure modes; for SMEs and customers to build public trust in the algorithm; and for introducing effective policies to regulate the technology.

  • Robustness: Robustness measures how stable an algorithm’s performance remains when a deployed model is attacked or when noise is introduced into the training data. It characterizes how well the algorithm performs on new, independent (but similar) data and ensures that the model can handle unseen, perturbed data. It also addresses how to estimate the uncertainty in the model’s predictions.
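
To make the fairness pillar concrete, here is a minimal, hand-rolled sketch (plain NumPy, no toolkit) that computes two common group-fairness metrics, statistical parity difference and disparate impact, on made-up predictions. The data and group assignments are purely illustrative.

```python
import numpy as np

# Hypothetical binary predictions (1 = favorable outcome) and a protected
# attribute (1 = privileged group, 0 = unprivileged group); values are made up.
y_pred     = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1])
privileged = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

# Rate of favorable outcomes for each group
p_priv = y_pred[privileged == 1].mean()    # 0.8
p_unpriv = y_pred[privileged == 0].mean()  # 0.4

# Statistical parity difference: 0 means parity; negative values favor the privileged group
spd = p_unpriv - p_priv

# Disparate impact: ratio of favorable rates; values far below 1 indicate bias
di = p_unpriv / p_priv

print(f"statistical parity difference = {spd:.2f}, disparate impact = {di:.2f}")
```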

To build trusted AI, IBM Research is developing diverse approaches for how to achieve fairness, robustness, explainability, accountability, and value alignment, and how to integrate them throughout the entire lifecycle of an AI application.

Three open source toolkits by IBM Research

  1. AI Fairness 360 – This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. Containing over 70 fairness metrics and 10 state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education.

  2. AI Explainability 360 – This extensible open source toolkit can help you comprehend how machine learning models predict labels, using a variety of methods, throughout the AI application lifecycle. Containing eight state-of-the-art algorithms for interpretable machine learning as well as metrics for explainability, it is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education.

  3. Adversarial Robustness 360 Toolbox – The Adversarial Robustness Toolbox is designed to support researchers and developers in creating novel defense techniques and in deploying practical defenses for real-world AI systems. For developers, the library provides interfaces that support composing comprehensive defense systems from individual methods used as building blocks. (A minimal attack sketch follows this list.)
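
As a rough illustration of what the Adversarial Robustness Toolbox does, the sketch below crafts evasion examples against a scikit-learn logistic regression with the Fast Gradient Method. The synthetic data, model choice, and eps value are assumptions for illustration, not part of the code patterns in this series.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Synthetic binary classification data (illustrative only)
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the scikit-learn model so that ART attacks can query it
classifier = SklearnClassifier(model=model)

# Craft evasion (adversarial) examples with the Fast Gradient Method
attack = FastGradientMethod(estimator=classifier, eps=0.3)
X_adv = attack.generate(x=X)

# Compare accuracy on clean versus adversarial inputs
print(f"clean accuracy: {model.score(X, y):.2f}, "
      f"adversarial accuracy: {model.score(X_adv, y):.2f}")
```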

Architecture

Flow

  1. Log in to IBM Watson® Studio powered by Spark, initiate IBM Cloud Object Storage, and create a project.
  2. Upload the .csv data file to IBM Cloud Object Storage.
  3. Load the data file in a Watson Studio notebook.
  4. Install the AI Explainability 360 Toolkit, the Adversarial Robustness Toolbox, and AI Fairness 360 in the Watson Studio notebook (see the installation snippet after this list).
  5. Visualize the explainability and interpretability of the AI model for the three different types of users.
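
For step 4, a notebook cell along these lines installs the three toolkits from PyPI (versions are unpinned here; pin them as needed in your environment):

```
# Run once in a Watson Studio notebook cell to install the three toolkits
!pip install aif360 aix360 adversarial-robustness-toolbox
```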

About this series

This series explains the AI 360 toolkits in detail and demonstrates how we use them to solve AI use cases, showing how to:

  • Build end-to-end AI model pipelines and make them transparent
  • Make AI models explain their outcomes, and ensure that they are not biased in their decision making
  • Build robust models that can predict with good accuracy on new data sets without retraining

This series is for everyone who wants to understand how AI models work and how to make them explainable to laypeople. The approach is widely applicable across multiple domains.

Identify and remove bias from AI models

The Identify and remove bias from AI models code pattern demonstrates how to use the AI Fairness 360 Toolkit to identify and mitigate bias in AI models, which helps businesses make fairer decisions. The code pattern uses a fraud prediction data set to show how the accuracy of AI models is affected by bias in the data set and how the toolkit helps remove that bias so that decision making is not compromised. This approach to building AI models without bias has wide applicability for developers and can be used to solve many use cases across different domains.
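
The following is a minimal sketch of that identify-and-mitigate flow with AI Fairness 360. The toy DataFrame, the 'gender' protected attribute, and the 'fraud_risk' label are hypothetical stand-ins, not the code pattern’s actual data set.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical training data: 'gender' is the protected attribute and
# 'fraud_risk' is the label, where 0 (no fraud risk) is the favorable outcome
df = pd.DataFrame({
    "gender":     [0, 0, 0, 0, 1, 1, 1, 1],
    "amount":     [120, 80, 300, 200, 90, 60, 450, 30],
    "fraud_risk": [1, 1, 0, 1, 0, 0, 1, 0],
})
dataset = BinaryLabelDataset(
    df=df,
    label_names=["fraud_risk"],
    protected_attribute_names=["gender"],
    favorable_label=0,
    unfavorable_label=1,
)
unprivileged, privileged = [{"gender": 0}], [{"gender": 1}]

# Identify bias: disparate impact well below 1 means the unprivileged group
# receives the favorable outcome far less often than the privileged group
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Disparate impact before mitigation:", metric.disparate_impact())

# Mitigate bias with the Reweighing pre-processing algorithm
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
rw.fit(dataset)
dataset_transf = rw.transform(dataset)
metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         unprivileged_groups=unprivileged,
                                         privileged_groups=privileged)
print("Disparate impact after reweighing:", metric_transf.disparate_impact())
```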

Unveiling a machine’s fraud prediction decision with AI Explainability 360

The Analyze AI fraud prediction models code pattern highlights the use of the AI Explainability 360 Toolkit to demystify the decisions made by a machine learning model and gain better insight and explainability. This not only helps policymakers and data scientists develop trusted, explainable AI applications, but also improves transparency for everyone. To demonstrate the AI Explainability 360 Toolkit, we extend the existing fraud detection code pattern and explain the AIX360 algorithms. We also guide you in choosing an appropriate explanation method or algorithm depending on the type of customer (for example, data scientist, general public, SME, or policymaker) who needs the model explained. The code pattern also demonstrates the use of the Adversarial Robustness 360 Toolbox to defend and evaluate machine learning models and applications against the adversarial threats of evasion, poisoning, extraction, and inference. It contains a self-explanatory notebook illustrating the following algorithms from the toolkits:

  • Protodash Explainer: Highlights the profiles of similar instances classified as ‘no fraud risk’ to a loan officer (a minimal sketch appears after this list).

  • Contrastive explanations method (CEM) algorithm using AI Explainability 360 on fraud data: Explains the significant factors influencing the favorable outcome, which is being classified as ‘no fraud risk.’

  • Unveiling the fraud detection AI model for data scientists using the Boolean rule column generation explainer: Shows a data scientist or machine learning engineer the decision rules that the model has identified.

  • Adversarial-Robustness-Toolbox for LightGBM: This notebook shows how to generate adversarial training data using the Adversarial Robustness Toolbox. This hardens the model against adversarial attacks so that it does not misclassify and can distinguish noise from real data. This step shows how robust the model is when making predictions on new data.
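
As a taste of the Protodash step, here is a minimal sketch using AIX360’s ProtodashExplainer to pick a few weighted prototypes from instances classified as ‘no fraud risk.’ The random feature matrix and the m=3 prototype count are assumptions for illustration, not the code pattern’s actual data.

```python
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

# Hypothetical feature matrix for transactions the model classified as 'no fraud risk'
X_no_fraud = np.random.RandomState(0).rand(200, 6)

# Select a few weighted prototypes that summarize the 'no fraud risk' group,
# so that a loan officer can inspect representative cases
explainer = ProtodashExplainer()
weights, indices, _ = explainer.explain(X_no_fraud, X_no_fraud, m=3)

print("Prototype row indices:", indices)
print("Prototype importance weights:", np.round(weights / weights.sum(), 2))
```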

Demonstrate fairness, explainability, and robustness in a single notebook

The Predict an event with fairness, explainability, and robustness code pattern explains how to use the AI 360 toolkits to create an end-to-end pipeline for AI models: demonstrating fairness, eliminating bias, making the models explainable, and showcasing their robustness. It uses a binary classification use case to demonstrate all of these features, which help production-deployed models work seamlessly.
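
A compressed sketch of such an end-to-end pipeline might look like the following. The synthetic data, the 'group' protected attribute, and the choice of logistic regression are assumptions, and the explainability step is reduced here to inspecting model coefficients rather than calling AI Explainability 360.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Hypothetical binary-classification data with a protected attribute 'group'
rng = np.random.RandomState(1)
df = pd.DataFrame(rng.rand(300, 4), columns=["f1", "f2", "f3", "group"])
df["group"] = (df["group"] > 0.5).astype(int)
df["label"] = (df["f1"] + 0.3 * df["group"] > 0.7).astype(int)

# 1. Fairness: reweigh the training instances to reduce group bias
ds = BinaryLabelDataset(df=df, label_names=["label"],
                        protected_attribute_names=["group"])
rw = Reweighing(unprivileged_groups=[{"group": 0}],
                privileged_groups=[{"group": 1}])
rw.fit(ds)
ds_fair = rw.transform(ds)

# 2. Train a model on the reweighed data (the sample weights carry the mitigation)
X, y = ds_fair.features, ds_fair.labels.ravel()
model = LogisticRegression(max_iter=1000).fit(
    X, y, sample_weight=ds_fair.instance_weights)

# 3. Explainability: reduced here to inspecting the model's coefficients
#    (the code pattern uses AI Explainability 360 algorithms instead)
print(dict(zip(ds_fair.feature_names, model.coef_.ravel().round(2))))

# 4. Robustness: probe the trained model with an evasion attack
art_clf = SklearnClassifier(model=model)
X_adv = FastGradientMethod(estimator=art_clf, eps=0.2).generate(x=X)
print("Accuracy clean vs. adversarial:", model.score(X, y), model.score(X_adv, y))
```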

Summary

This series helps stakeholders and developers fully understand the AI model lifecycle and make informed decisions. The black box of AI models is made transparent, bias free, robust, and explainable. The code patterns help developers and machine learning engineers explore the open source IBM AI 360 toolkits to solve multiple use cases across different domains.