Published August 8, 2019
AI Explainability 360 is a comprehensive toolkit that offers a unified API bringing together state-of-the-art explainability algorithms, metrics, tutorials, and guidance material.
The initial release of AI Explainability 360 contains eight different algorithms, created by IBM Research. We invite the broader research community to contribute their own algorithms.
Algorithms include methods for understanding both data and models. Some algorithms directly learn models that people can interpret and understand. Others first train an inscrutable black box model and then explain it afterwards with a second, simpler model. Some algorithms explain decisions on individual samples, while others explain the entire model at once.
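To make the post hoc, per-sample flavor of explanation concrete, here is a minimal, dependency-free sketch in the spirit of local surrogate methods such as LIME: perturb an input, query the black box, and fit a linear model around that point. The function names, the toy black box, and the least-squares routine are all illustrative assumptions, not part of the AI Explainability 360 API.

```python
import random

# Hypothetical black box: an opaque scoring function over two features.
# Its internals are unknown to the explainer; only its outputs are used.
def black_box(x):
    return 1.0 if 0.8 * x[0] - 0.5 * x[1] + 0.1 > 0 else 0.0

def local_linear_explanation(model, x, n_samples=500, scale=0.1, seed=0):
    """Fit a linear surrogate to `model` in a neighborhood of point `x`.

    Perturbs x with Gaussian noise, records the black box's responses,
    and solves the normal equations (X^T X) w = X^T y by Gauss-Jordan
    elimination so the sketch needs no external libraries.
    """
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0.0, scale) for xi in x]
        xs.append(z + [1.0])          # trailing 1.0 is the bias term
        ys.append(model(z))
    d = len(xs[0])
    A = [[sum(r[i] * r[j] for r in xs) for j in range(d)] for i in range(d)]
    b = [sum(r[i] * y for r, y in zip(xs, ys)) for i in range(d)]
    for i in range(d):                # Gauss-Jordan elimination
        p = A[i][i]
        for j in range(d):
            A[i][j] /= p
        b[i] /= p
        for k in range(d):
            if k != i:
                f = A[k][i]
                for j in range(d):
                    A[k][j] -= f * A[i][j]
                b[k] -= f * b[i]
    return b[:-1]  # per-feature weights; the sign shows direction of influence

weights = local_linear_explanation(black_box, [0.2, 0.4])
```

Near the point [0.2, 0.4], the fitted weights recover the local behavior of the hidden rule: a positive weight on the first feature and a negative weight on the second, which is exactly the kind of per-sample insight a local post hoc explainer provides.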
Black box machine learning models that cannot be understood by people, such as deep neural networks and large ensembles, are achieving impressive accuracy on various tasks and gaining widespread adoption. As they grow in popularity, explainability and interpretability, which permit human understanding of the machine's decision-making process, are becoming more essential. In fact, according to an IBM Institute for Business Value survey, 68% of the global executives surveyed believe that customers will demand more explainability in the next three years.
Explainability is not a singular approach. There are many ways to explain how a machine learning model makes predictions, including explaining the training data, learning a directly interpretable model, explaining a black box model post hoc, and explaining either individual predictions or the model's behavior as a whole.
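At the other end of the spectrum from post hoc methods are directly interpretable models, whose entire decision process can be read by a person. A minimal illustration is a one-rule "decision stump" learned from data; the toy loan dataset, feature names, and fitting routine below are made-up examples for exposition, not an AI Explainability 360 algorithm.

```python
def fit_stump(X, y):
    """Pick the single (feature, threshold) split that best predicts y."""
    best = None
    n = len(X)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            preds = [1 if row[f] > t else 0 for row in X]
            acc = sum(p == label for p, label in zip(preds, y)) / n
            if best is None or acc > best[0]:
                best = (acc, f, t)
    return best

# Toy loan data: columns are (income_k, debt_ratio); label 1 = approved.
X = [(30, 0.9), (80, 0.2), (55, 0.4), (25, 0.8), (90, 0.1), (60, 0.3)]
y = [0, 1, 1, 0, 1, 1]

acc, feature, threshold = fit_stump(X, y)
names = ["income_k", "debt_ratio"]
rule = f"predict approved if {names[feature]} > {threshold}"
```

On this toy data the learned model is the single sentence "predict approved if income_k > 30", which classifies every training example correctly. A consumer needs no further explanation algorithm, because the model is its own explanation; the trade-off is that such simple models may not match black box accuracy on harder tasks.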
The appropriate choice depends on the persona of the consumer and the requirements of the machine learning pipeline. AI Explainability 360 differs from other open source explainability offerings through the diversity of its methods, its focus on educating a variety of stakeholders, and its extensibility via a common framework.
Let's look at two of these algorithms as examples.
The AI Explainability 360 toolkit interactive experience provides a gentle introduction to explainability concepts and capabilities. The tutorials and other notebooks offer a deeper, data scientist-oriented introduction. The complete API documentation is also available. Because the toolkit offers such a comprehensive set of capabilities, it can be hard to figure out which algorithms are most appropriate for a given use case. To help, we have created guidance material that you can consult.
As an open source project, the AI Explainability 360 toolkit benefits from a vibrant ecosystem of contributors, both from the technology industry and academia.
This is the first explainability toolbox that couples a unified API with industry-relevant policy specifications and tutorials covering the different methods of explanation. Bringing together the top explainability algorithms and metrics in the field will help accelerate both the scientific advancement of the field and the adoption of these techniques in real-world deployments.
We encourage you to contribute your metrics and algorithms. Please join the community and get started as a contributor.
Read our contribution guidelines to get started.