Monitor custom machine learning engine with Watson OpenScale
Deploy a custom machine learning engine using Docker and Kubernetes, and monitor payload logging and fairness using Watson OpenScale
In this developer code pattern, we will log the payload for a model deployed on a custom model serving engine using the Watson OpenScale Python SDK. We'll use Keras to build a deep learning REST API and monitor it with Watson OpenScale.
This pattern describes a method to use Watson OpenScale and a custom machine learning model serving engine. With Watson OpenScale, we can monitor model quality and log payloads, regardless of where the model is hosted. In this case, we use the example of a custom model serving application, which demonstrates the agnostic and open nature of Watson OpenScale.
IBM Watson OpenScale is an open environment that enables organizations to automate and operationalize their AI. OpenScale provides a powerful platform for managing AI and ML models on the IBM Cloud, or wherever they may be deployed, offering these benefits:
- Open by design — Watson OpenScale allows monitoring and management of ML and DL models built using any frameworks or IDEs and deployed on any model hosting engine.
- Drive fairer outcomes — Watson OpenScale detects and helps mitigate model biases to highlight fairness issues. The platform provides plain-text explanations of the data ranges that have been impacted by bias in the model, along with visualizations that help data scientists and business users understand the impact on business outcomes. As biases are detected, Watson OpenScale automatically creates a de-biased companion model that runs beside the deployed model, thereby previewing the expected fairer outcomes to users without replacing the original.
- Explain transactions — Watson OpenScale helps enterprises bring transparency and auditability to AI-infused applications by generating explanations for individual transactions being scored, including the attributes used to make the prediction and the weight of each attribute.
When you have completed this code pattern, you’ll understand how to:
- Build a custom model serving engine using Keras
- Access the custom model using a REST API
- Log the payload for the model using Watson OpenScale
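A custom model serving engine can be as small as a web app that wraps the model behind a scoring route. The sketch below is illustrative only: a stand-in function plays the role of the Keras model from this pattern, and the `/predict` route and payload shape are assumptions rather than the pattern's exact API.

```python
# Minimal sketch of a custom model serving engine (illustrative).
# A stand-in function replaces the pattern's Keras model here.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(rows):
    """Stand-in for model.predict(): one score per input row."""
    return [[sum(row) / max(len(row), 1)] for row in rows]

@app.route("/predict", methods=["POST"])
def score():
    # Expect a JSON body like {"values": [[f1, f2, ...], ...]}
    body = request.get_json(force=True)
    rows = body.get("values", [])
    return jsonify({"fields": ["prediction"], "values": predict(rows)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

In the actual pattern, the `predict` function would load the trained Keras model and run inference on the incoming rows; the REST surface stays the same.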
- User deploys application server on the IBM Cloud using Kubernetes and Docker.
- User creates a Jupyter Notebook on Watson Studio and configures Watson OpenScale and Compose PostgreSQL.
- Watson OpenScale is used to monitor a machine learning model for payload logging and quality.
- The application server is used for scoring the deployed model.
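Payload logging means recording each scoring request and its response with Watson OpenScale. The helper below only assembles such a record as plain dictionaries in a fields/values layout; it is a hedged sketch, and sending the record through the Watson OpenScale Python SDK is left as a comment, since the client setup depends on your service credentials.

```python
# Assemble one scoring request/response pair for payload logging
# (illustrative layout; the notebook in this pattern forwards such
# records to Watson OpenScale via its Python SDK).
def build_payload_record(fields, values, predictions):
    scoring_request = {"fields": fields, "values": values}
    scoring_response = {"fields": ["prediction"], "values": predictions}
    # The SDK's payload-logging call would receive this pair, along
    # with timing metadata, for the monitored deployment.
    return {"request": scoring_request, "response": scoring_response}

record = build_payload_record(
    fields=["age", "income"],   # hypothetical feature names
    values=[[35, 52000.0]],
    predictions=[[0.82]],
)
```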
Find the detailed steps for this pattern in the README. The steps will show you how to:
- Clone the repo.
- Create Watson services with IBM Cloud.
- Create a notebook in IBM Watson Studio for use with a publicly addressed server or run the notebook locally for local testing only.
- Perform either step 4i (for use with Watson Studio) or step 4ii (for local testing only):
- Run the application server in a Kubernetes cluster.
- Run the application server locally.
- Run the notebook in IBM Watson Studio.
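Once the application server is running, whether in the Kubernetes cluster or locally, the notebook scores the deployed model over REST. A minimal sketch, assuming a hypothetical `/predict` endpoint on port 5000; the actual host, port, route, and payload shape come from your deployment:

```python
# Score the deployed custom serving engine over REST (illustrative;
# the endpoint URL and payload shape are assumptions, not the
# pattern's exact API).
import json
from urllib import request as urlrequest

def build_scoring_request(values):
    """JSON body for one scoring call; `values` is a list of feature rows."""
    return json.dumps({"values": values}).encode("utf-8")

def score(url, values):
    req = urlrequest.Request(
        url,
        data=build_scoring_request(values),
        headers={"Content-Type": "application/json"},
    )
    with urlrequest.urlopen(req) as resp:  # network call; needs a live server
        return json.loads(resp.read())

if __name__ == "__main__":
    print(score("http://localhost:5000/predict", [[35, 52000.0]]))
```

The same request/response pair can then be forwarded to Watson OpenScale for payload logging, which is what the notebook in this pattern automates.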