Infuse AI into your application


Summary

In this code pattern, we’ll use IBM Cloud Pak for Data to load customer demographic and trading activity data into IBM Db2 Warehouse. From there, we’ll analyze the data using a Jupyter notebook with Brunel visualizations. Finally, we’ll use the Spark machine learning library to create a model that predicts customer churn risk. The model will be deployed as a web service and used for inference in an application.

Description

This code pattern demonstrates the use of a Jupyter notebook to interact with Db2 Warehouse, Brunel visualizations, and the Spark machine learning library, covering everything from the creation of database objects to advanced analytics and machine learning model development and deployment. This code pattern is built on IBM Cloud Pak for Data, an open, cloud-native information architecture for AI. With this integrated, fully governed team platform, you can keep your data secure at its source and flexibly add your preferred data and analytics microservices. Simplify how you collect, organize, and analyze data to infuse AI across your business.

The sample data used in this code pattern provides customer demographics and trading activity for an online stock trading company. In this use case, the company would like to predict the risk of customer churn and integrate targeted incentives into their user-facing applications.

After completing this code pattern, you’ll understand how to:

  • Find your way around IBM Cloud Pak for Data.
  • Load data into Db2 Warehouse.
  • Create an analytics project in IBM Cloud Pak for Data.
  • Add a remote data set to your project.
  • Use Jupyter notebooks.
  • Visualize data using Brunel charts (see the sketch after this list).
  • Build and test a machine learning model with Spark MLlib.
  • Deploy the model as a web service with IBM Cloud Pak for Data.
  • Access the model from an external application for inference (churn risk prediction).
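
For example, once the customer data is available to the notebook as a pandas DataFrame, a Brunel chart can be rendered with a single magic command. The following is a minimal sketch, assuming the brunel package is installed in the notebook environment; the DataFrame name and the AGE, ESTINCOME, and CHURNRISK columns are hypothetical stand-ins for the columns in the sample data.

```python
import pandas as pd
import brunel  # registers the %brunel magic in the notebook

# Hypothetical DataFrame holding customer demographics and churn labels
customer_df = pd.read_csv("customer_demographics.csv")

# Scatter plot of age vs. estimated income, colored by churn risk
%brunel data('customer_df') x(AGE) y(ESTINCOME) color(CHURNRISK) tooltip(#all) :: width=800, height=400
```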

Flow


  1. Data is loaded into Db2 Warehouse.
  2. The Jupyter notebook accesses the data.
  3. The Jupyter notebook uses Brunel for information visualization.
  4. The Jupyter notebook uses the Spark ML library to create a model (see the sketch after this list).
  5. The Jupyter notebook saves the model to the repository for deployment.
  6. Applications access the model via the REST API.
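
The sketch below illustrates steps 2 and 4 under a few assumptions: the notebook has a Spark session, the customer table is read from Db2 Warehouse over JDBC, and a simple classification pipeline is trained with Spark MLlib. The host, credentials, table name, and column names are placeholders; the notebook in this pattern may instead use the data-access code that IBM Cloud Pak for Data generates for a remote data set.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.getOrCreate()

# Step 2: read the customer table from Db2 Warehouse over JDBC (placeholder connection details)
df = (spark.read.format("jdbc")
      .option("url", "jdbc:db2://<db2wh-host>:50000/BLUDB")
      .option("driver", "com.ibm.db2.jcc.DB2Driver")
      .option("dbtable", "CUSTOMER_DATA")
      .option("user", "<user>")
      .option("password", "<password>")
      .load())

# Step 4: train a simple churn-risk classifier with Spark MLlib
label_indexer = StringIndexer(inputCol="CHURNRISK", outputCol="label")
assembler = VectorAssembler(inputCols=["AGE", "ESTINCOME", "TOTALUNITSTRADED"],
                            outputCol="features")
classifier = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=20)

pipeline = Pipeline(stages=[label_indexer, assembler, classifier])
train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)

# Quick check on the holdout split before the model is saved for deployment (step 5)
accuracy = model.transform(test).where("prediction = label").count() / test.count()
print(f"Holdout accuracy: {accuracy:.2f}")
```

In the actual notebook, the trained model is then saved to the repository so it can be deployed as a web service and called over the REST API (steps 5 and 6).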

Instructions

Find the detailed steps for this pattern in the README file. The steps will show you how to:

  1. Clone the repo.
  2. Load the data into Db2 Warehouse.
  3. Set up an analytics project.
  4. Create the notebook.
  5. Insert a Spark DataFrame.
  6. Run the notebook.
  7. Analyze the results.
  8. Test the model in the UI.
  9. Deploy the model.
  10. Use the model in an app (a sample scoring request follows these steps).
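
For step 10, the external application calls the deployed model’s REST scoring endpoint. The following is a hedged sketch using Python’s requests library; the endpoint URL, access token, field names, and exact payload shape are hypothetical and should be taken from the deployment details shown in IBM Cloud Pak for Data.

```python
import requests

# Placeholders: copy the real values from your deployment's details page
SCORING_URL = "https://<cluster-host>/<scoring-path>"
ACCESS_TOKEN = "<access-token>"

payload = {
    # Field names must match the columns the model was trained on
    "fields": ["AGE", "ESTINCOME", "TOTALUNITSTRADED"],
    "values": [[42, 85000.0, 258]],
}

response = requests.post(
    SCORING_URL,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
response.raise_for_status()
print(response.json())  # includes the predicted churn risk for the submitted customer
```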