by Anjali Shah, Rui Fan, Daniel Kikuchi, Mark Sturdevant | Published February 8, 2019
In this code pattern, we'll use IBM Cloud Pak for Data to load customer demographic and trading activity data into IBM Db2 Warehouse. From there, we'll analyze the data in a Jupyter notebook with Brunel visualizations. Finally, we'll use the Spark machine learning library to create a model that predicts customer churn risk. The model will be deployed as a web service and used for inference in an application.
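To make the last step concrete, here is a minimal sketch of how an application might shape a scoring request for the deployed model. The field names and the payload layout are assumptions for illustration; they follow the general shape of a Watson Machine Learning scoring body, not the pattern's actual API.

```python
# Hypothetical sketch: building a scoring request for a deployed churn model.
# Field names (AGE, GENDER, TRADES_PER_MONTH) are illustrative assumptions.
import json

def build_scoring_payload(customers):
    """Shape customer records into a scoring request body."""
    fields = ["AGE", "GENDER", "TRADES_PER_MONTH"]
    return {
        "input_data": [{
            "fields": fields,
            "values": [[c[f] for f in fields] for c in customers],
        }]
    }

payload = build_scoring_payload(
    [{"AGE": 42, "GENDER": "F", "TRADES_PER_MONTH": 3}])
print(json.dumps(payload))
```

The application would POST this JSON to the model's scoring endpoint and read the predicted churn risk from the response.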
This code pattern demonstrates the use of a Jupyter notebook to interact with Db2 Warehouse, Brunel visualizations, and the Spark machine learning library — from the creation of database objects to advanced analytics and machine learning model development and deployment. The pattern is built on IBM Cloud Pak for Data, an open, cloud-native information architecture for AI. With this integrated, fully governed team platform, you can keep your data secure at its source and flexibly add your preferred data and analytics microservices, simplifying how you collect, organize, and analyze data to infuse AI across your business.
The sample data used in this code pattern provides customer demographics and trading activity for an online stock trading company. In this use case, the company would like to predict the risk of customer churn and integrate targeted incentives into their user-facing applications.
After completing this code pattern, you’ll understand how to:
Find the detailed steps for this pattern in the readme file. The steps will show you how to: