The AIF360 team is thrilled to announce a major update to our fairness toolkit: compatibility with scikit-learn.


Scikit-learn is a very popular data science library which is enormously useful for training established machine learning algorithms, computing basic metrics, and building model pipelines. In fact, many of our example notebooks already use scikit-learn classifiers with pre-processing or post-processing workflows.

Unfortunately, switching between AIF360 algorithms and scikit-learn algorithms breaks the workflow and forces the user to convert data structures back and forth. We’re also unable to leverage some of the powerful meta-programming tools from scikit-learn like pipelines and cross validation.

The latest release of AIF360, version 0.3.0, brings a number of changes, but the highlight is the new aif360.sklearn module. This is where you can find all of the scikit-learn-compatible AIF360 functionality that has been completed so far.

Note: This is still a work-in-progress and not all functionality is yet migrated. Interested developers can also check out the “sklearn-compat” development branch to get the latest features and make contributions.

The vision for this update is to make AIF360 functionality interchangeable with scikit-learn functionality: algorithms can be swapped with debiasing algorithms, and metrics can be swapped with fairness metrics. For example, instead of a simple LogisticRegression classifier, you can use an AdversarialDebiasing classifier; instead of just the recall_score, you can measure the equal_opportunity_difference, or the difference in recall between protected groups. All of this should be as easy as swapping a line of code.
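
As a rough illustration, the swap might look like the following. This is a minimal sketch; X_train, y_train, X_test, and y_test stand in for data prepared as in the examples later in this post and are not defined here.

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from aif360.sklearn.inprocessing import AdversarialDebiasing
from aif360.sklearn.metrics import equal_opportunity_difference

# Before: a standard scikit-learn classifier and metric
clf = LogisticRegression().fit(X_train, y_train)
recall_score(y_test, clf.predict(X_test))

# After: swap in the fairness-aware equivalents, one line at a time
clf = AdversarialDebiasing(prot_attr='race').fit(X_train, y_train)
equal_opportunity_difference(y_test, clf.predict(X_test), prot_attr='race')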

However, in order to incorporate fairness features into algorithms, we are not able to ensure complete compatibility with scikit-learn in some cases. For example, some scikit-learn preprocessors, like sklearn.decomposition.PCA, will strip sample properties such as protected attributes and thus cause errors in AIF360 steps later in the pipeline. Read on to learn about workarounds and other caveats for working with scikit-learn.
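
To make the problem concrete, here is a hedged sketch of the failure mode and the manual fix, assuming X is an all-numeric DataFrame with the protected attributes stored in its index (as in the datasets described below):

import pandas as pd
from sklearn.decomposition import PCA

# scikit-learn transformers return plain NumPy arrays, so the index
# carrying the protected attributes is silently dropped here...
Xt = PCA(n_components=5).fit_transform(X)

# ...and must be reattached by hand before any AIF360 step can use it
Xt = pd.DataFrame(Xt, index=X.index)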

Old API remains

The old API will remain for the foreseeable future as we continue to replicate its functionality in the new API. Once this is completed, we may choose to deprecate support for the old API within a few versions but this will be communicated clearly when the time comes and depends on feedback from the community.

Capabilities overview

Again, this is a work-in-progress. Many features are still missing and the API is still somewhat experimental. User feedback and contributions are critical for this project.

For more in-depth explanations of the capabilities, see the API reference. For an interactive demonstration of the capabilities, we have an example notebook.

Datasets

Four of the five datasets included in AIF360 are replicated here: Adult Census Income, German Credit, Bank Marketing, and COMPAS Recidivism. They now download automatically from OpenML the first time the corresponding function is called and are cached for later reuse.

The data structure is simplified as well. The data is separated into familiar X features and y target values, as well as sample_weight where available. X is returned as a pandas DataFrame and y as a Series, with the original data values (e.g., string category values) by default and the per-sample protected attribute values stored in the index.

For example, if this is the input:

from aif360.sklearn.datasets import fetch_compas

X, y = fetch_compas(binary_race=True)
X.head()

The output would be something like this:

                            sex  age       age_cat              race  juv_fel_count  juv_misd_count  juv_other_count  priors_count c_charge_degree                   c_charge_desc
id sex    race
3  Male   African-American   Male   34       25 - 45  African-American              0               0                0             0               F  Felony Battery w/Prior Convict
4  Male   African-American   Male   24  Less than 25  African-American              0               0                1             4               F           Possession of Cocaine
8  Male   Caucasian          Male   41       25 - 45         Caucasian              0               0                0            14               F       Possession Burglary Tools
10 Female Caucasian        Female   39       25 - 45         Caucasian              0               0                0             0               M                         Battery
14 Male   Caucasian          Male   27       25 - 45         Caucasian              0               0                0             0               F          Poss 3,4 MDMA (Ecstasy)

Now, let’s encode the protected attributes as 0 or 1. Since the default ordering of the categories assigns 0 to unprivileged attributes and 1 to privileged ones, this will make things easier when calculating metrics as we can make use of the default priv_group=1.

import pandas as pd

X.index = pd.MultiIndex.from_arrays(X.index.codes, names=X.index.names)
y.index = pd.MultiIndex.from_arrays(y.index.codes, names=y.index.names)

We can also flip the labels since recidivism is unfavorable. This isn’t strictly necessary but it saves us from having to provide the pos_label for all the metrics.

y = 1 - pd.Series(y.factorize(sort=True)[0], index=y.index)

As previously mentioned, some scikit-learn steps will strip the index containing protected attribute information from the data. This makes it difficult to use most sklearn.preprocessing steps, such as input normalization and one-hot encoding. Below, we show another workaround, but in the interest of simplicity, we hope to work more closely with the scikit-learn community to make this work seamlessly.

from sklearn.model_selection import train_test_split
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1234567)
data_preproc = make_column_transformer(
        (OneHotEncoder(sparse=False, handle_unknown='ignore'), X_train.dtypes == 'category'),
        remainder=StandardScaler())

X_train = pd.DataFrame(data_preproc.fit_transform(X_train), index=X_train.index)
X_test = pd.DataFrame(data_preproc.transform(X_test), index=X_test.index)

Algorithms

Three algorithms are included in the initial release of aif360.sklearn: one pre-processor (Reweighing), one in-processor (Adversarial Debiasing), and one post-processor (Calibrated Equalized Odds). We welcome contributions from the community as we work to make all 11 of the algorithms available (and any new ones as well, of course).

Adversarial Debiasing works very much like any other scikit-learn Estimator – it trains with the fit() method and can return both “hard” (predict()) and “soft” (predict_proba()) predictions.

from aif360.sklearn.inprocessing import AdversarialDebiasing
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)

adv_deb = AdversarialDebiasing(prot_attr='race', adversary_loss_weight=1.0, random_state=1234567)
adv_deb.fit(X_train, y_train)
y_pred_AD = adv_deb.predict(X_test)
adv_deb.sess_.close()

Reweighing breaks the scikit-learn API conventions a bit since it needs to return new sample weights from transform(). As a workaround, we have included a meta-estimator which combines the reweigher and an arbitrary estimator in a single fit() step.

from aif360.sklearn.preprocessing import ReweighingMeta, Reweighing
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(solver='liblinear')
rew = ReweighingMeta(estimator=lr, reweigher=Reweighing('race'))
rew.fit(X_train, y_train)
y_pred_REW = rew.predict(X_test)

The Calibrated Equalized Odds post-processor also requires a workaround. Post-processors train on predictions from a black-box estimator and ground-truth values to produce fairer predictions. This alone is without precedent in scikit-learn. Furthermore, to avoid data leakage, the training set for the post-processor should differ from the training set of the estimator. The PostProcessingMeta class takes care of both of these issues by combining the training and prediction of an arbitrary estimator and the post-processor while seamlessly splitting the dataset.

from aif360.sklearn.postprocessing import CalibratedEqualizedOdds, PostProcessingMeta

pp = CalibratedEqualizedOdds('race', cost_constraint='fnr', random_state=1234567)
ceo = PostProcessingMeta(estimator=lr, postprocessor=pp, random_state=1234567)
ceo.fit(X_train, y_train)
y_pred_CEO = ceo.predict(X_test)
y_proba_CEO = ceo.predict_proba(X_test)

Metrics

Most of the fairness metrics have been reproduced as standalone functions. This means it is no longer necessary to create an object for each pair of predictions and ground-truth labels, though it does make each function call's syntax longer. Furthermore, since the inputs are also valid for functions from sklearn.metrics, we avoid reimplementing functions like accuracy_score and recall_score.

Additionally, the newly ported metrics can be used as scorers in a grid search, for example.
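
For instance, a fairness metric can be wrapped with scikit-learn's make_scorer and passed to GridSearchCV. This is a hedged sketch: lr is the LogisticRegression defined above, the parameter grid is purely illustrative, and in practice you would likely balance the fairness score against accuracy rather than maximize it alone.

from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
from aif360.sklearn.metrics import disparate_impact_ratio

# Keyword arguments to make_scorer are forwarded to the metric on each call,
# so the scorer knows which protected attribute to use
di_scorer = make_scorer(disparate_impact_ratio, prot_attr='race')

grid = GridSearchCV(lr, param_grid={'C': [0.1, 1.0, 10.0]}, scoring=di_scorer)
grid.fit(X_train, y_train)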

from aif360.sklearn.metrics import disparate_impact_ratio

train_di = disparate_impact_ratio(y_train, prot_attr='race')

print(f'Training set disparate impact: {train_di:.3f}')

Training set disparate impact: 0.773

from aif360.sklearn.metrics import average_odds_error
from sklearn.metrics import accuracy_score

acc_AD = accuracy_score(y_test, y_pred_AD)
deo_AD = average_odds_error(y_test, y_pred_AD, prot_attr='race')

print(f'[Adversarial Debiasing] Test accuracy: {acc_AD:.2%}')
print(f'[Adversarial Debiasing] Test equal odds measure: {deo_AD:.3f}')

[Adversarial Debiasing] Test accuracy: 65.43%
[Adversarial Debiasing] Test equal odds measure: 0.091

acc_REW = accuracy_score(y_test, y_pred_REW)
di_REW = disparate_impact_ratio(y_test, y_pred_REW, prot_attr='race')

print(f'[Reweighing] Test accuracy: {acc_REW:.2%}')
print(f'[Reweighing] Test disparate impact: {di_REW:.3f}')

[Reweighing] Test accuracy: 66.64%
[Reweighing] Test disparate impact: 0.893

from aif360.sklearn.metrics import difference, generalized_fnr

acc_CEO = accuracy_score(y_test, y_pred_CEO)
dfnr_CEO = difference(generalized_fnr, y_test, y_proba_CEO[:, 1], prot_attr='race')

print(f'[Calibrated Equalized Odds] Test accuracy: {acc_CEO:.2%}')
print(f'[Calibrated Equalized Odds] Test FNR difference: {dfnr_CEO:.3f}')

[Calibrated Equalized Odds] Test accuracy: 63.99%
[Calibrated Equalized Odds] Test FNR difference: 0.053

Looking for contributions

On that note, we're always looking for new open-source contributors! Now that we have examples of how to migrate each algorithm type, if you would like to help improve the project, feel free to choose an unimplemented algorithm, claim it by posting on the GitHub issue, and migrate it. We also have a Slack channel devoted to this undertaking, #sklearn-compat, where you can ask questions and provide feedback.