
Monitor model drift with Watson OpenScale

Using an AI-infused modern bank loan application to make predictions about approving or rejecting a loan can significantly reduce the workload of the bank loan agent. However, when there is a dispute over the result, the agent becomes responsible for explaining the reasons behind the outcome.

Assume that a customer applies for a loan at a bank and the loan is rejected. A natural step would be for the customer to reach out to a loan representative who works for the bank for an explanation and to find out whether the result was fair. After an initial analysis, the loan representative would pass the request to the IT department for a detailed analysis. The request is then routed to the data scientist who developed the initial model.

customer loan rejected

After the initial analysis and rechecking of the model parameters, the data scientist looking for answers tries to determine what changed with this model. As you see in this tutorial, the machine learning model was built within IBM® Watson™ Studio using a Jupyter Notebook and was deployed using Watson Machine Learning. To analyze the model’s behavior, the data scientist sets up the model to be monitored by Watson OpenScale. Watson OpenScale is available within IBM Cloud Pak® for Data as a service, similar to Watson Studio and Watson Machine Learning.

data scientist monitors model

Watson OpenScale contains monitors that analyze quality, fairness, explainability, and drift. In this tutorial, you learn how to configure a drift monitor that detects when the model drops in accuracy or starts receiving data inconsistent with how it was trained.

Drift monitoring

In the machine learning lifecycle, drift refers to changes in the performance of the model over time. The prediction data processed through the model impacts the accuracy of the model and affects the business processes using the model. Watson OpenScale analyzes transactions to detect drift in model accuracy as well as drift in data.

  • Drift in model accuracy occurs when there is an increase in transactions that are similar to those that the model did not evaluate correctly during training.
  • Drift in data estimates the drop in consistency of the data at run time as compared to the characteristics of the data at training time.

After identifying drift as the potential issue within the model, you have two options:

  1. Contact the application developer, who can redeploy the application to point to the REST endpoint of the new model generated by Watson OpenScale.

  2. Contact the data steward to make modifications to the data that will support model retraining.

Get started with drift monitoring using Watson OpenScale

Verify Watson OpenScale setup

  1. Launch a browser and navigate to your IBM Cloud Pak for Data deployment.

    Cloud Pak for Data login

  2. Click the services icon at the upper-right corner.

    Open Services

  3. Look at the Watson OpenScale tile and ensure that it has the Enabled tag on it.

    Select OpenScale

There are several ways to configure Watson OpenScale to monitor a machine learning model, including an automated setup that gets you up and running quickly. In this tutorial, I configure the drift monitor through Python APIs in a Jupyter Notebook.

Note: Keep this browser window open while you continue with the setup through the Jupyter Notebook because you will analyze changes on the dashboard as you run the cells.

Create a project and deployment space

Create a new project

IBM Cloud Pak for Data uses projects to collect and organize resources. Your project resources can include data, collaborators, and analytic assets like notebooks and models.

  1. Click Projects in the left menu.

    Projects

  2. Click New project +.

    Start a new project

  3. Select Analytics project for the project type, and click Next.

    Select project type

  4. Select Create an empty project.

    Create empty project

  5. Name the project, and click Create.

    Click create

Create a deployment space

IBM Cloud Pak for Data uses deployment spaces to configure and manage the deployment of a set of related deployable assets. These assets can be data files, machine learning models, and so on.

  1. Click Analyze, then Analytics deployments from the left menu.

    Analytics deployments

  2. Click New deployment space +.

    Add New deployment space

  3. Select Create an empty space.

    Create empty deployment space

  4. Give your deployment space a unique name and an optional description, then click Create. You’ll use this space later when you deploy a machine learning model.

    Create deployment space

Set up drift monitor with a Jupyter Notebook

Load the notebook

You will be using the ConfigureOpenScale-Drift.ipynb notebook. You can also look at a copy of the notebook with results saved after running all of the cells.

  1. From the project overview page, click Add to project + to launch the Choose asset type window.

    Notebook Open

  2. Select Notebook from the options, and switch to the From file tab.

    Notebook Open

  3. Click Drag and drop files here or upload, upload the ConfigureOpenScale-Drift notebook, and click Create notebook. This loads the Jupyter Notebook.

Run the notebook

A notebook is composed of text (markdown or heading) cells and code cells. The markdown cells provide comments on what the code is designed to do.

You run the cells individually by highlighting each cell and then either clicking the Run button at the top of the notebook or using the keyboard shortcut to run the cell (Shift + Enter, although this can vary based on the platform).

While the cell is running, an asterisk ([*]) appears to the left of the cell. When that cell has finished running, a sequential number appears (for example, [17]).

Note: Some of the comments in the notebook are directions for you to modify specific sections of the code. Perform any changes as indicated before running the cell.

Load and prepare data set

When the Jupyter Notebook is loaded and the kernel is ready, you are ready to start running it. Click on the pencil icon to run or edit the notebook.

Notebook loaded

The Package installation section installs some of the libraries you are going to use in the notebook (many libraries come pre-installed on IBM Cloud Pak for Data).
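The exact package list and version pins vary with the notebook revision and the Cloud Pak for Data release, but the installation cell typically looks something like this sketch:

```python
# Install the Watson Machine Learning and Watson OpenScale Python clients used
# later in the notebook; adjust the package list and versions to match your notebook.
!pip install --upgrade ibm-watson-machine-learning ibm-watson-openscale
```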

Model building and deployment

This section of the notebook loads the data set, builds the classification model, and deploys it using Watson Machine Learning.

After executing all of the cells under this section, the last step is to deploy the model as a RESTful web service in Watson Machine Learning.

Deploy model
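The notebook's own deployment cells are not reproduced in this tutorial, but the general pattern with the ibm-watson-machine-learning client is sketched below. The credentials, deployment space ID, software specification, model type string, and the pipeline, X_train, and y_train objects are placeholders standing in for values and assets created in earlier cells.

```python
from ibm_watson_machine_learning import APIClient

# Placeholder Cloud Pak for Data credentials; replace with your cluster's values.
wml_credentials = {
    "url": "https://<your-cpd-cluster>",
    "username": "<username>",
    "password": "<password>",
    "instance_id": "openshift",
    "version": "4.0",
}

wml_client = APIClient(wml_credentials)
wml_client.set.default_space("<deployment-space-id>")  # the space you created earlier

# Store the trained classification model (assumed here to be a scikit-learn pipeline).
# The TYPE string and software specification must match your framework and runtime versions.
model_props = {
    wml_client.repository.ModelMetaNames.NAME: "loan-risk-model",
    wml_client.repository.ModelMetaNames.TYPE: "scikit-learn_1.0",
    wml_client.repository.ModelMetaNames.SOFTWARE_SPEC_UID:
        wml_client.software_specifications.get_uid_by_name("runtime-22.1-py3.9"),
}
stored_model = wml_client.repository.store_model(
    model=pipeline, meta_props=model_props,
    training_data=X_train, training_target=y_train,
)
model_uid = wml_client.repository.get_model_id(stored_model)

# Deploy the stored model as an online (REST) scoring endpoint.
deploy_props = {
    wml_client.deployments.ConfigurationMetaNames.NAME: "loan-risk-deployment",
    wml_client.deployments.ConfigurationMetaNames.ONLINE: {},
}
deployment = wml_client.deployments.create(artifact_uid=model_uid, meta_props=deploy_props)
deployment_uid = wml_client.deployments.get_uid(deployment)
```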

Configure Watson OpenScale

In this step, you begin by importing libraries to set up a Python Watson OpenScale client.
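The exact imports vary by notebook revision, but a typical Cloud Pak for Data setup with the ibm-watson-openscale SDK looks roughly like the following sketch; the URL and credentials are placeholders.

```python
from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator
from ibm_watson_openscale import APIClient

# Placeholder Cloud Pak for Data credentials; replace with your cluster's values.
authenticator = CloudPakForDataAuthenticator(
    url="https://<your-cpd-cluster>",
    username="<username>",
    password="<password>",
    disable_ssl_verification=True,
)

wos_client = APIClient(service_url="https://<your-cpd-cluster>", authenticator=authenticator)
print(wos_client.version)

# Reuse the datamart that Watson OpenScale set up on this cluster; it stores the
# training, payload, and feedback data associated with monitored deployments.
data_marts = wos_client.data_marts.list().result.data_marts
data_mart_id = data_marts[0].metadata.id
```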

After a datamart has been associated to save the training and feedback data that is submitted to the model:

  1. Create a subscription to get a handle to the Watson OpenScale monitor through the Python code. (A rough sketch of this call appears after this list.)

    WOS subscription

  2. Switch to the window where you launched the Watson OpenScale dashboard, and you see a tile created under the Model Monitors dashboard.

    WOS deployment

  3. Click the deployment tile to open the Evaluations view. Notice that none of the monitors are configured yet. But, as you run the notebook cells, the drift monitor is enabled.

    WOS deployment

    Note: The name of the deployment is the value you set for the CUSTOM_NAME variable within the notebook.

    WOS custom name
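The subscription call in step 1 depends on how the notebook registers the Watson Machine Learning instance, but with the ibm-watson-openscale SDK it generally follows a pattern like the one below. The asset and deployment names are illustrative, service_provider_id is assumed to come from an earlier wos_client.service_providers.add call, and the full argument list (asset properties, training data reference, and so on) is abbreviated here.

```python
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import (
    Asset,
    AssetDeploymentRequest,
)
from ibm_watson_openscale.supporting_classes.enums import (
    AssetTypes,
    DeploymentTypes,
    InputDataType,
    ProblemType,
)

# model_uid, deployment_uid, and service_provider_id come from earlier cells.
subscription = wos_client.subscriptions.add(
    data_mart_id=data_mart_id,
    service_provider_id=service_provider_id,
    asset=Asset(
        asset_id=model_uid,
        name="loan-risk-model",
        asset_type=AssetTypes.MODEL,
        input_data_type=InputDataType.STRUCTURED,
        problem_type=ProblemType.BINARY_CLASSIFICATION,
    ),
    deployment=AssetDeploymentRequest(
        deployment_id=deployment_uid,
        name="loan-risk-deployment",
        deployment_type=DeploymentTypes.ONLINE,
    ),
    background_mode=False,
).result

subscription_id = subscription.metadata.id
```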

Generate the drift model

To enable drift monitoring, you must create a drift detection model that uses the payload data to evaluate whether the deployed model is undergoing drift.

In this notebook, I generate a Watson Machine Learning-based drift model using the Watson OpenScale DriftTrainer API.

Drift model

Optionally, you can import a previously created model to continue with this tutorial.

optional model

Submit payload

After the Watson Machine Learning service has been bound and the subscription has been created, you must send a request to the model before you configure Watson OpenScale. This allows Watson OpenScale to create a payload log in the datamart with the correct schema so that it can capture data coming into and out of the model.

payload
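The request itself is an ordinary scoring call against the Watson Machine Learning deployment. A minimal sketch, assuming the wml_client and deployment_uid from the deployment step and using made-up feature names, would be:

```python
# Hypothetical feature fields and values; use the columns that your loan data set
# and deployed model actually expect.
fields = ["LoanAmount", "LoanDuration", "EmploymentStatus", "ExistingSavings"]
values = [
    [5000, 24, "employed", "100_to_500"],
    [12000, 48, "unemployed", "less_100"],
]

scoring_payload = {"input_data": [{"fields": fields, "values": values}]}

# Score against the online deployment created earlier. Once the subscription exists,
# Watson OpenScale logs each request and response in the datamart's payload table.
predictions = wml_client.deployments.score(deployment_uid, scoring_payload)
print(predictions)
```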

Enable drift monitoring

  1. Run the following cell to enable drift monitoring through the API. (An illustrative sketch of such a cell appears below.)

    enable drift

  2. Switch to the Watson OpenScale dashboard window and from the deployment tile, click the three dots icon, and select Configure monitors.

    configure monitor

Note: The Drift monitor is enabled and marked with a check mark next to it. The Drift threshold and Sample size are values that were set in the Python code shown previously.

drift enabled
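Your notebook cell will differ in its exact values, but enabling the drift monitor through the ibm-watson-openscale SDK typically looks something like the following sketch. The drift_threshold and min_samples values here are illustrative; whatever you set in the notebook is what appears as Drift threshold and Sample size on the dashboard.

```python
from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Target
from ibm_watson_openscale.supporting_classes.enums import TargetTypes

# Point the drift monitor at the subscription created earlier.
target = Target(target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id)

# Illustrative settings: alert when estimated drift exceeds 10%, and evaluate
# only when at least 100 payload records are available.
parameters = {
    "min_samples": 100,
    "drift_threshold": 0.1,
    "train_drift_model": False,  # the drift model archive was generated and uploaded earlier
    "enable_model_drift": True,
    "enable_data_drift": True,
}

drift_monitor_details = wos_client.monitor_instances.create(
    data_mart_id=data_mart_id,
    background_mode=False,
    monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID,
    target=target,
    parameters=parameters,
).result

drift_monitor_instance_id = drift_monitor_details.metadata.id
```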

Run the drift monitor

The drift monitor runs every 3 hours by default on the data that was submitted within that time period. However, it can also be invoked on demand through the subscription to the monitor that you created previously, as shown in the following code.

run drift
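A minimal sketch of such an on-demand run, assuming the drift_monitor_instance_id captured when the monitor was created:

```python
# Trigger an on-demand drift evaluation instead of waiting for the scheduled
# three-hour run; background_mode=False blocks until the evaluation completes.
drift_run = wos_client.monitor_instances.run(
    monitor_instance_id=drift_monitor_instance_id,
    background_mode=False,
).result

# Print the latest drift measurements recorded for this monitor instance.
wos_client.monitor_instances.show_metrics(monitor_instance_id=drift_monitor_instance_id)
```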

Note: Make sure that you stop the kernel of your notebook when you are finished to conserve resources. You do this by going to the Assets page of the project, selecting the three dots under the Action column for the notebook you have been running, and selecting Stop Kernel from the Actions menu. If you see a lock icon on the notebook, click it to unlock the notebook so that you can stop the kernel.

Stop kernel

At this point, you are finished executing the notebook.

Analyze model drift with Watson OpenScale GUI

Now that you have created a machine learning model and configured Watson OpenScale, you can use the Watson OpenScale dashboard to monitor the model.

The drift monitor scans the requests sent to your model deployment (that is, the payload) to ensure that the model is robust and does not drift over time. Drift in model predictions can occur either because the requests sent to the model are requests similar to samples in the training data where the model struggled or because the requests are becoming inconsistent with the training data the model originally used.

  1. Refresh the Insights dashboard page, and under the Model Monitors tab you see that there is one Drift alert. From this run, the deployment has identified 35% drift in the model.

    alert drift

  2. Click the deployment tile, and select the Drop in accuracy option on the left panel to show the model drift visualization.

    Drop in accuracy

    You see that the model has an estimated drop of 35.3% in Drop in accuracy and an estimated drop of 20% in Drop in data consistency. This means that the transaction data (scoring requests) are inconsistent compared to the training data.

  3. Click a data point in the Drop in accuracy graph (the blue line in the previous image) to view drift details.

    Drift  transactions

From here, you can explore the transactions that lead to drift in accuracy as well as drift in data consistency.

Drift responsible transactions

Note: The previous images might not match exactly what you see in your dashboard. The monitors are using the payload (scoring requests) that were sent to the model, which were randomly selected.

In some cases, you might not see a drop in accuracy from model drift. If you do not see anything in your dashboard, you can always submit a new set of requests to the model and trigger the drift evaluation again.

If you explore a transaction, it might take some time to generate transaction explanations because Watson OpenScale makes multiple requests to the deployed model. Drift is supported for structured data only, and regression models support only data drift.

Conclusion

This tutorial explained how to configure a drift monitor that detects when the model drops in accuracy or starts receiving data inconsistent with how it was trained. The tutorial is part of the Modernizing your bank loan department series, a solution that shows how to automate and enhance loan transaction processes with AI.