
IBM Developer Blog


The AI Fairness 360 toolkit is becoming even more accessible for a wider range of developers

The field of artificial intelligence (AI) continues to advance, promising to improve industries and society. But as the technology matures, “build for performance” no longer suffices as an AI design paradigm. We are now in an era where AI must be built, evaluated, and monitored for trust.

IBM® continues to serve as an industry leader in advancing what we call Trusted AI, focused on developing diverse approaches that implement elements of fairness, explainability, and accountability across the entire lifecycle of an AI application.

The IBM AI Fairness 360 toolkit

Under our Trusted AI efforts, IBM released the AI Fairness 360 toolkit (AIF360) in 2018: an extensible, open source toolkit that helps you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. It contains more than 70 fairness metrics and 11 state-of-the-art bias mitigation algorithms developed by the research community, and it is designed to translate algorithmic research from the lab into practice in domains as wide-ranging as finance, human capital management, healthcare, and education.
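To give a flavor of what a fairness metric measures, here is a minimal hand-rolled illustration of one well-known metric, disparate impact (the group names and numbers below are hypothetical; AIF360 itself computes this and many other metrics for you, for example through its `BinaryLabelDatasetMetric` class):

```python
import numpy as np

# Toy hiring outcomes (1 = favorable), split by a protected attribute.
# These numbers are made up purely for illustration.
labels_privileged = np.array([1, 1, 1, 0, 1, 1, 0, 1])    # 6/8 favorable
labels_unprivileged = np.array([1, 0, 0, 1, 0, 0, 1, 0])  # 3/8 favorable

def disparate_impact(unpriv, priv):
    """Ratio of favorable-outcome rates between groups.
    Values far below 1.0 suggest the unprivileged group is disadvantaged;
    a common rule of thumb flags ratios under 0.8."""
    return unpriv.mean() / priv.mean()

print(disparate_impact(labels_unprivileged, labels_privileged))  # 0.5
```

Here the unprivileged group receives the favorable outcome at half the rate of the privileged group, which a fairness audit would flag for mitigation.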

Adding new functionalities to AI Fairness 360

Now, IBM is adding two new capabilities that make AIF360 accessible to a wider range of developers: compatibility with scikit-learn and an R package.

R users can now use the AI Fairness 360 toolkit

AI fairness is an important topic as machine learning models are increasingly used for high-stakes decisions. Machine learning discovers and generalizes patterns in data and can therefore replicate the systematic advantages of privileged groups. To ensure fairness, we must analyze and address any bias present in our training data or models.

We are pleased to announce the release of the AI Fairness 360 R package, an open source library containing techniques to help detect and mitigate bias in data sets and machine learning models throughout the AI application lifecycle. Read “The AIF360 fairness toolkit is now available for R users” for details.

AI Fairness 360 now has compatibility with scikit-learn

The scikit-learn data science library is enormously useful for training established machine learning algorithms, computing basic metrics, and building model pipelines. In fact, many example notebooks in AI Fairness 360 already use scikit-learn classifiers with pre-processing or post-processing workflows. However, switching between AI Fairness 360 algorithms and scikit-learn algorithms breaks the workflow and forces you to convert data structures back and forth. You are also unable to use some of scikit-learn’s powerful meta-programming tools, like pipelines and cross-validation.
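The meta-programming tools mentioned above are what make scikit-learn compatibility attractive. As a quick sketch (on synthetic data), a `Pipeline` chains preprocessing and a classifier into a single estimator, and `cross_val_score` evaluates the whole thing with k-fold cross-validation:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a real tabular data set.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# One estimator object that scales features, then fits a classifier.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# 5-fold cross-validation of the entire pipeline in one call.
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```

An AIF360 estimator that follows scikit-learn conventions can slot into a pipeline like this instead of forcing a detour through separate data structures.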

So, the AIF360 team added a new aif360.sklearn module to the latest release of AIF360, version 0.3.0. This module contains all of the scikit-learn-compatible AIF360 functionality completed so far. Get all of the information about this update in “The AIF360 team adds compatibility with scikit-learn.”
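The scikit-learn-compatible API works on familiar pandas data structures, carrying protected attributes in the DataFrame or Series index. The following is a hand-rolled sketch of that style, computing statistical parity difference on hypothetical predictions; it mirrors the convention but is not the module’s exact API, so consult the aif360.sklearn documentation for the real function names:

```python
import pandas as pd

# Hypothetical binary predictions, indexed by a protected attribute,
# mirroring the aif360.sklearn style of keeping group membership in
# a pandas index rather than a separate data structure.
y_pred = pd.Series(
    [1, 1, 0, 1, 0, 0, 1, 0],
    index=pd.Index(
        ["priv", "priv", "priv", "priv",
         "unpriv", "unpriv", "unpriv", "unpriv"],
        name="group",
    ),
)

def statistical_parity_difference(y, priv="priv", unpriv="unpriv"):
    """Difference in favorable-outcome rates (unprivileged minus
    privileged); 0 means parity, negative values disadvantage the
    unprivileged group."""
    rates = y.groupby(level="group").mean()
    return rates[unpriv] - rates[priv]

print(statistical_parity_difference(y_pred))  # -0.5
```

Because the metric takes ordinary pandas objects, it composes naturally with scikit-learn estimators and cross-validation loops, which is the point of the new module.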

Moving trusted AI forward

You can use AIF360 to detect fairness issues at training time from Watson Studio, and Watson® OpenScale™ to detect fairness issues at runtime. Watson OpenScale already supports some of the AIF360 metrics, and longer term we are working to integrate more AIF360 metrics into Watson OpenScale at both design time and runtime. AIF360 will also be made available through the IBM Cloud Pak for Data Open Source catalog.