Monitor custom machine learning engine with Watson OpenScale

Get the code

Summary

In this developer code pattern, we log the payload for a model deployed on a custom model-serving engine using the Watson OpenScale Python SDK. We use Keras to build a deep learning model, serve it through a REST API, and monitor it with Watson OpenScale.

Description

This pattern describes a method to use Watson OpenScale and a custom machine learning model serving engine. With Watson OpenScale, we can monitor model quality and log payloads, regardless of where the model is hosted. In this case, we use the example of a custom model serving application, which demonstrates the agnostic and open nature of Watson OpenScale.

IBM Watson OpenScale is an open environment that enables organizations to automate and operationalize their AI. OpenScale provides a powerful platform for managing AI and ML models on the IBM Cloud, or wherever they may be deployed, offering these benefits:

  • Open by design — Watson OpenScale allows monitoring and management of ML and DL models built using any frameworks or IDEs and deployed on any model hosting engine.
  • Drive fairer outcomes — Watson OpenScale detects and helps mitigate model biases to highlight fairness issues. The platform provides plain-text explanations of the data ranges that have been impacted by bias in the model, along with visualizations that help data scientists and business users understand the impact on business outcomes. As biases are detected, Watson OpenScale automatically creates a de-biased companion model that runs beside the deployed model, previewing the expected fairer outcomes to users without replacing the original.
  • Explain transactions — Watson OpenScale helps enterprises bring transparency and auditability to AI-infused applications by generating explanations for individual transactions being scored, including the attributes used to make the prediction and the weight of each attribute.
  • Automate the creation of AI — Neural Network Synthesis (NeuNetS), available in this update as a beta, synthesizes neural networks by fundamentally architecting a custom design for a given data set. In the beta, NeuNetS will support image and text classification models. NeuNetS reduces the time and lowers the skill barrier required to design and train custom neural networks, thereby putting neural networks within the reach of non-technical subject-matter experts, as well as making data scientists more productive.

When you have completed this code pattern, you’ll understand how to:

  • Build a custom model serving engine using Keras
  • Access the custom model using a REST API
  • Log the payload for the model using Watson OpenScale
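The first objective, a custom model-serving engine exposed over REST, can be sketched with only the Python standard library so the example runs anywhere. In the actual pattern the model is a trained Keras network; here `score()` is a hypothetical stand-in for `keras_model.predict()` that returns a fixed probability per input row.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def score(values):
    # Stand-in for keras_model.predict(values); returns one dummy
    # probability per input row so the sketch runs without Keras.
    return [[0.5] for _ in values]

class ScoringHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON scoring request of the form {"values": [[...], ...]}.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"predictions": score(payload["values"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

def serve_and_score(values, port=8123):
    # Start the server in a background thread, score once over REST,
    # then shut the server down.
    server = HTTPServer(("127.0.0.1", port), ScoringHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    req = Request(
        "http://127.0.0.1:%d/v1/score" % port,
        data=json.dumps({"values": values}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        result = json.loads(resp.read())["predictions"]
    server.shutdown()
    return result

print(serve_and_score([[1.0, 2.0, 3.0]]))  # [[0.5]]
```

The pattern's real application server plays the same role: accept a JSON scoring request, run the Keras model, and return predictions that Watson OpenScale can then log and monitor.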

Flow

Machine learning engine flow diagram

  1. User deploys application server on the IBM Cloud using Kubernetes and Docker.
  2. User creates a Jupyter Notebook on Watson™ Studio and configures Watson OpenScale and Compose PostgreSQL.
  3. Watson OpenScale is used to monitor a machine learning model for payload logging and quality.
  4. The application server is used for scoring the deployed model.
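Step 3 hinges on payload logging: each scoring request/response pair is stored, along with its latency, so Watson OpenScale can evaluate quality over time. A rough sketch of assembling such a record with plain dictionaries is below; `build_payload_record` and the field names are hypothetical, and the actual store call through the Watson OpenScale Python SDK requires provisioned services, so it appears only as a comment.

```python
import time

# With services provisioned, a record like this would be stored via the
# Watson OpenScale Python SDK, roughly:
#   subscription.payload_logging.store(records=[PayloadRecord(
#       request=request, response=response, response_time=elapsed_ms)])

def build_payload_record(request, response, response_time_ms):
    # Hypothetical helper: bundle one scoring request/response pair
    # with its latency for payload logging.
    return {
        "request": request,
        "response": response,
        "response_time": response_time_ms,
    }

start = time.time()
# Hypothetical feature names; the fields/values layout mirrors the
# scoring request format the custom serving engine accepts.
request = {"fields": ["age", "income"], "values": [[42, 55000]]}
# In the pattern, this response comes from the custom serving engine.
response = {"fields": ["prediction", "probability"],
            "values": [[1, [0.2, 0.8]]]}
elapsed_ms = int((time.time() - start) * 1000)

record = build_payload_record(request, response, elapsed_ms)
print(sorted(record))  # ['request', 'response', 'response_time']
```

Because the record is assembled client-side, the same logging flow works no matter where the model is hosted, which is the open, engine-agnostic point of the pattern.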

Instructions

Find the detailed steps for this pattern in the README. The steps will show you how to:

  1. Clone the repo.
  2. Create Watson services with IBM Cloud.
  3. Create a notebook in IBM Watson Studio for use with a publicly addressed server or run the notebook locally for local testing only.
  4. Perform either 4a for use with Watson Studio or 4b for local testing only:
    • 4a. Run the application server in a Kubernetes cluster.
    • 4b. Run the application server locally.
  5. Run the notebook in IBM Watson Studio.