This tutorial is part of the Getting started with IBM Cloud Pak for Data learning path.
| Module | Title | Type |
| --- | --- | --- |
| 100 | Introduction to IBM Cloud Pak for Data | Article |
| 101 | Data Virtualization on IBM Cloud Pak for Data | Tutorial |
| 201 | Data visualization with Data Refinery | Tutorial |
| 202 | Find, prepare, and understand data with Watson Knowledge Catalog | Tutorial |
| 301A | Data analysis, model building, and deploying with Watson Machine Learning with notebook | Pattern |
| 301B | Automate model building with AutoAI | Tutorial |
| 301C | Build a predictive machine learning model quickly and easily with IBM SPSS Modeler | Tutorial |
| 401 | Monitor the model with Watson OpenScale | Pattern |
Data Refinery is part of IBM Watson® and comes with IBM Watson Studio on the public IBM Cloud® and with IBM Watson Knowledge Catalog running on-premises on IBM Cloud Pak® for Data. It’s a self-service data-preparation client for data scientists, data engineers, and business analysts. With it, you can quickly transform large amounts of raw data into quality, consumable information that’s ready for analytics. Data Refinery makes it easy to explore, prepare, and deliver data that people across your organization can trust.
In this tutorial, you will learn how to:
- Load data into the IBM Cloud Pak for Data platform for use with Data Refinery.
- Transform a sample data set by entering R code at the command line or by selecting operations from the menu.
- Use Data Flow steps to keep track of your work.
- Visualize data with charts and graphs.
Completing this tutorial should take about 45 minutes.
Step 1. Load the billing.csv data into Data Refinery
Download the billing.csv file.
From the Project home, click the Assets tab. Next, either drag and drop the downloaded billing.csv file onto the right-hand pane where it says Drop files here or browse for files to upload, or click browse and select the downloaded billing.csv file.
Click on the newly added billing.csv file.
You should be able to see the data as shown below. Click on Refine.
Data Refinery should launch and open the data.
Click the X by the Details button to close it.
Step 2. Refine your data
We’ll start out on the Data tab. Transform your sample data set by entering R code at the command line or by selecting operations from the menu. For example, type filter on the command line and notice that autocomplete offers hints on the command’s syntax and usage. Alternatively, hover over an operation or function name to see a description and detailed information for completing the command. When you’re ready, click Apply to apply the operation to your data set, then click the +Operation button to add the next one.
We notice that TotalCharges is a string, but because it represents a decimal number, let’s convert its values to decimal. Choose the Convert column type operation.
Click + Select column, then pick Column > TotalCharges and Type > Decimal, then click Apply.
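Outside the Data Refinery UI, the effect of this type conversion can be sketched in plain Python. The sample rows below are made up for illustration; in the tutorial the data comes from billing.csv.

```python
# Sketch of the "Convert column type" step: TotalCharges arrives as a
# string and is converted to a decimal (float) value. Empty strings are
# kept as None so the later "Remove empty rows" step can find them.
rows = [
    {"CustomerID": "0001", "TotalCharges": "29.85"},
    {"CustomerID": "0002", "TotalCharges": "1889.5"},
    {"CustomerID": "0003", "TotalCharges": ""},  # an empty value
]

for row in rows:
    value = row["TotalCharges"].strip()
    row["TotalCharges"] = float(value) if value else None

print(rows[0]["TotalCharges"])  # 29.85
```

The point is only that the column changes type in place; Data Refinery records the same change as a step in your data flow.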
We want to make sure that there are no empty values, and the TotalCharges column happens to have some, so let’s find them. Click the Filter operation, choose the TotalCharges column from the drop-down list and the Is empty operator, then click Apply.
We can see that there are only three rows with an empty value for TotalCharges.
It should be safe to just drop these rows from the data set, so let’s do that.
Remove the filter you just added. You can delete it using one of the following methods:
- Hover over the corresponding step in the Steps section and the delete icon (trash can) will appear. Click on this icon to remove the filter.
- Click the undo arrow at the top of the page.
Next, choose the operation Remove empty rows, select the TotalCharges column, click Next, then click Apply on the next screen.
Finally, we can remove the CustomerID column, since that won’t be useful for training a machine learning model in the next exercise. Choose the Remove operator, then Change column selection. Under Select a column, pick CustomerID, then Next, then Apply.
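Taken together, the two clean-up steps above amount to dropping rows with an empty TotalCharges and then discarding the CustomerID column. A minimal plain-Python sketch, again with made-up sample rows:

```python
# Illustrative sample rows; in Data Refinery these come from billing.csv.
rows = [
    {"CustomerID": "0001", "TotalCharges": 29.85, "Churn": "No"},
    {"CustomerID": "0002", "TotalCharges": None,  "Churn": "No"},   # empty TotalCharges
    {"CustomerID": "0003", "TotalCharges": 104.2, "Churn": "Yes"},
]

# "Remove empty rows" applied to the TotalCharges column.
rows = [r for r in rows if r["TotalCharges"] is not None]

# "Remove" the CustomerID column, which won't help train a model.
rows = [{k: v for k, v in r.items() if k != "CustomerID"} for r in rows]

print(len(rows))  # 2
print(rows[0])    # {'TotalCharges': 29.85, 'Churn': 'No'}
```

In Data Refinery, each of these appears as its own step that you can later edit or remove.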
Step 3. Use data flow steps to keep track of your work
What if we do something we don’t want? Data Refinery keeps track of the steps and we can undo (or redo) an action using the circular arrows.
As you refine your data, Data Refinery keeps track of the steps in your data flow. You can modify them and even select a step to return to a particular moment in your data’s transformation.
To see the steps in the data flow that you have performed, click the Steps button. The operations you have performed on the data will be shown.
You can modify these steps in real time and save for future use.
Step 4. Profile the data
Clicking on the Profile tab will bring up a quick view of several histograms about the data.
You can get insights into the data from the histograms:
- Twice as many customers are on month-to-month contracts as on one- or two-year contracts.
- More customers choose paperless billing, but around 40 percent still prefer to receive a paper bill.
- You can see the distribution of MonthlyCharges and TotalCharges.
- From the Churn column, you can see that a significant number of customers have canceled their service.
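The kind of per-column summary the Profile tab computes can be sketched with the Python standard library; here Counter tallies a categorical column (the sample values are made up):

```python
from collections import Counter

# Illustrative Churn values; the Profile tab builds a histogram like
# this for every column in the data set.
churn = ["No", "No", "Yes", "No", "Yes", "No"]
counts = Counter(churn)

print(counts["Yes"])          # 2
print(counts.most_common(1))  # [('No', 4)]
```

For numeric columns such as MonthlyCharges, the same idea applies with binned value ranges instead of category labels.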
Step 5. Visualize with charts and graphs
Choose the Visualizations tab to bring up an option to choose which columns to visualize. Under Columns to Visualize, choose TotalCharges and click Visualize data.
We first see the data in a histogram by default. You can choose other chart types. We’ll pick Scatter plot next by clicking on it.
In the scatter plot, choose TotalCharges for the x-axis, MonthlyCharges for the y-axis, and Churn for the color map.
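For comparison, the same chart could be produced outside Data Refinery with matplotlib. This is a sketch with made-up rows; only the column names follow the tutorial’s data set.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Made-up sample rows with the three columns used in the chart.
rows = [
    {"TotalCharges": 29.85,  "MonthlyCharges": 29.85, "Churn": "No"},
    {"TotalCharges": 1889.5, "MonthlyCharges": 56.95, "Churn": "No"},
    {"TotalCharges": 108.15, "MonthlyCharges": 53.85, "Churn": "Yes"},
]

fig, ax = plt.subplots()
# One scatter series per Churn value so each gets its own color.
for churn, color in [("No", "tab:blue"), ("Yes", "tab:red")]:
    xs = [r["TotalCharges"] for r in rows if r["Churn"] == churn]
    ys = [r["MonthlyCharges"] for r in rows if r["Churn"] == churn]
    ax.scatter(xs, ys, color=color, label=churn)

ax.set_xlabel("TotalCharges")
ax.set_ylabel("MonthlyCharges")
ax.legend(title="Churn")
fig.savefig("churn_scatter.png")
```

Data Refinery builds the same kind of figure for you from the x-axis, y-axis, and color-map selections.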
Scroll down and give the scatter plot a title and sub-title if you wish. Under the Actions panel, notice that you can perform tasks such as start over, download chart details, display data label in chart, download chart image, or set global visualization preferences (hover over the icons to see the names). Click on the gear icon in the Actions panel.
The global visualization preferences include settings for titles, tools, color schemes, and notifications. Click the Theme tab, change the color scheme to Vivid, then click Apply.
The new color scheme now applies to all of our charts.
This tutorial showed you a small sampling of the power of Data Refinery on IBM Cloud Pak for Data. You transformed data using R code at the command line and using operations on columns, such as changing a data type, removing empty rows, and deleting columns altogether. You saw that every step in the data flow is recorded, so you can remove steps, repeat them, or edit an individual step. You also profiled the data to see histograms and statistics for each column, and finally created a more in-depth visualization: a scatter plot of total charges versus monthly charges, with the churn results highlighted in color.
To continue the Getting started with IBM Cloud Pak for Data learning path and learn more about IBM Cloud Pak for Data, take a look at Find, prepare, and understand data with Watson Knowledge Catalog.