
Apache Spark is an open source cluster-computing framework. It can access diverse data sources, including HDFS, Apache Cassandra, Apache HBase, and Amazon S3, and you can use it interactively from the Scala, Python, and R shells. You can run Spark in its standalone cluster mode, on an IaaS cloud, on Hadoop YARN, or on Apache Mesos.

z/OS is an extremely scalable and secure high-performance operating system based on the 64-bit z/Architecture. z/OS is highly reliable for running mission-critical applications, and the operating system supports web- and Java-based applications.

Learning objectives

In this tutorial, we demonstrate running an analytics application using Spark on z/OS, powered by the z Systems Community Cloud. Apache Spark on z/OS provides an in-place, optimized abstraction for real-time analysis of structured and unstructured enterprise data.

z/OS Platform for Apache Spark includes a supported version of the Apache Spark open source capabilities, consisting of the Apache Spark core, Spark SQL, Spark Streaming, the Machine Learning Library (MLlib), and GraphX. It also includes optimized data access to a broad set of structured and unstructured data sources through Spark APIs. With this capability, traditional z/OS data sources, such as IMS, VSAM, IBM Db2 for z/OS, PDSE, or SMF data, can be accessed in a performance-optimized manner with Spark.

This analytics example uses data stored in Db2 and VSAM, and a machine learning application written in Scala. The example also uses the open source Jupyter Notebook to write and submit Scala code to your Spark instance and to view the output within a web GUI. The Jupyter Notebook is commonly used in data analytics for data cleaning and transformation, numerical simulation, statistical modeling, machine learning, and much more.

The scenarios are accomplished by using the z/OS Platform for Apache Spark, the Jupyter Notebook, and Scala.

Prerequisites

Register at z Systems Community Cloud for a trial account. You’ll receive an email containing credentials to access the self-service portal. Use these credentials to start exploring all the available services.

Estimated time

This how-to should take approximately one hour.

Steps

Start your Spark cluster

  1. Open a web browser and enter the URL to access the z Systems Community Cloud self-service portal.

  2. Enter your Portal User ID and Portal Password, and click Sign In.

  3. You see the home page for the z Systems Community Cloud self-service portal. Click on Try Analytics Service.

  4. You now see a dashboard showing the status of your Apache Spark on z/OS instance.

At the top of the screen, notice the z/OS Status indicator, which should show the status of your instance as OK.

In the middle of the screen, the Spark Instance, Status, Data management, and Operations sections are displayed. The Spark Instance section contains your individual Spark username and IP address.

Below the field headings, you can see buttons for the functions that can be applied to your instance, such as Start, Upload Data, Spark Submit, Spark UI, and Jupyter.

  1. If this is the first time you are trying the Analytics Service on z/OS, you must set a new Spark password.

  2. Confirm your instance is Active. If it is Stopped, click Start to start it.

Upload the Db2 and VSAM data

Download all the sample files here.

Load the Db2 data by clicking Upload Data, selecting the Db2 DDL file and the Db2 data file, and clicking Upload.

“Upload Success” appears in the dashboard when the data load is complete. The VSAM data for this exercise has already been loaded for you; however, you can repeat this step by uploading the VSAM copybook and VSAM data file that you downloaded to your local system.

Submit a Scala program to analyze the data

To submit a prepared Scala program to analyze the data:

  1. Click Spark Submit.
  2. Select your Spark Demo JAR file.
  3. Specify the main class name com.ibm.scalademo.ClientJoinVSAM.
  4. Enter the arguments: your Spark instance username followed by your Spark instance password.
  5. Click Submit.

You will use the same Spark username and password later to log in to the Spark GUI and view the job results.

“JOB Submitted” appears in the dashboard when the program is complete. This Scala program accesses Db2 and VSAM data, performs transformations on the data, joins these two tables in a Spark dataframe, and stores the result back to Db2.
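The ClientJoinVSAM program ships in the sample JAR, so you do not have to write it yourself. As a rough, hedged sketch of what such a program could look like, the following Scala code reads the two sources through Spark's generic JDBC data source, joins them, and writes the result back to Db2. The driver URL, table names, output table, and join column (CONT_ID) are placeholders, not the actual values used by the sample.

import org.apache.spark.sql.SparkSession

object ClientJoinVSAMSketch {
  def main(args: Array[String]): Unit = {
    // The job receives the Spark instance username and password as arguments.
    val Array(username, password) = args

    val spark = SparkSession.builder().appName("ClientJoinVSAM sketch").getOrCreate()

    // Hypothetical JDBC connection to the z/OS data service.
    val url = "jdbc:<zos-data-service>://<host>:<port>"

    def read(table: String) =
      spark.read.format("jdbc")
        .option("url", url)
        .option("dbtable", table)
        .option("user", username)
        .option("password", password)
        .load()

    val clientInfo  = read("CLIENT_INFO_VSAM")    // placeholder VSAM mapping
    val clientTrans = read("CLIENT_TRANS_DB2")    // placeholder Db2 table

    // Join the two sources on a shared customer key (placeholder column name).
    val joined = clientInfo.join(clientTrans, Seq("CONT_ID"))

    // Store the joined result back to Db2 (placeholder output table).
    joined.write.format("jdbc")
      .option("url", url)
      .option("dbtable", "CLIENT_JOIN_RESULT")
      .option("user", username)
      .option("password", password)
      .mode("overwrite")
      .save()

    spark.stop()
  }
}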

Launch Spark GUI to view the submitted job

Launch your individual Spark worker output GUI to view the job you just submitted.

  1. Click Spark UI
  2. Click on the Worker ID for your program in the Completed Drivers section.
  3. Log in with your Spark username and Spark password.
  4. Click on stdout for your program in the Finished Drivers section to view your results.

Launch Jupyter Notebook and connect to Spark

Launch the Jupyter Notebook tool installed in the dashboard. This tool allows you to write and submit Scala code to your Spark instance, and view the output within a web GUI.

  1. Launch the Jupyter Notebook service in your browser from your dashboard and click on Jupyter to see the Jupyter home page.

    The prepared Scala program in this level accesses Db2 and VSAM data, performs transformations on the data, joins these two tables in a Spark dataframe, and stores the result back to Db2. It also performs a logistic regression analysis and plots the output.

  2. Double click the Demo.ipynb file.

The Jupyter Notebook connects to your Spark on z/OS instance automatically and is in the ready state when the Apache Toree – Scala indicator in the top right hand corner of the screen is clear.

Run Jupyter Notebook cells to load data and perform analysis

The Jupyter Notebook environment is divided into input cells, each labeled with an In [ ]: prompt.

Load VSAM and Db2 data into Spark and perform a data transformation

Run cell #1 – The Scala code in the first cell loads the VSAM data (customer information) into Spark and performs a data transformation.

Click on the first In [ ]:

The left border of a cell changes to blue when the cell is in command mode.

Before running the code, change the value of zOS_IP to your Spark IP address, the value of zOS_USERNAME to your Spark username, and the value of zOS_PASSWORD to your Spark password.

Click the run cell button in the notebook toolbar.

The Jupyter Notebook connection to your Spark instance is in the busy state when the Apache Toree – Scala indicator in the top right hand corner of the screen is grey.

When this indicator turns clear, the cell run has completed and the kernel has returned to the ready state. The output appears below the cell.
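For reference only, here is a minimal sketch of what a cell like this might contain. It assumes the spark session that the Apache Toree kernel provides and Spark's generic JDBC data source; the driver URL, table name, and column names are placeholders rather than the actual contents of Demo.ipynb.

// Connection details for your Spark on z/OS instance; replace these with the
// values shown in your dashboard before running the cell.
val zOS_IP       = "xxx.xxx.xxx.xxx"
val zOS_USERNAME = "SPKxxxx"
val zOS_PASSWORD = "your-spark-password"

// Hypothetical read of the VSAM customer data through the generic JDBC source.
val clientInfo_raw = spark.read.format("jdbc")
  .option("url", s"jdbc:<zos-data-service>://$zOS_IP:<port>")
  .option("dbtable", "CLIENT_INFO_VSAM")          // placeholder VSAM mapping
  .option("user", zOS_USERNAME)
  .option("password", zOS_PASSWORD)
  .load()

// Example transformation: cast the activity level to a numeric type
// (ACTIVITY_LEVEL is a placeholder column name).
import org.apache.spark.sql.functions.col
val clientInfo_df = clientInfo_raw
  .withColumn("ACTIVITY_LEVEL", col("ACTIVITY_LEVEL").cast("double"))

clientInfo_df.show(5)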

Run cell #2 – The Scala code in the second cell loads the Db2 data (transaction data) into Spark and performs a data transformation.

Click on the next In [ ]: to select the next cell, and click the run cell button.

The output appears below the cell.
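Continuing the same sketch (and reusing the credentials defined in the first cell), the Db2 transaction data could be read the same way, again with placeholder names:

// Hypothetical read of the Db2 transaction data.
val clientTrans_df = spark.read.format("jdbc")
  .option("url", s"jdbc:<zos-data-service>://$zOS_IP:<port>")
  .option("dbtable", "CLIENT_TRANS_DB2")          // placeholder Db2 table name
  .option("user", zOS_USERNAME)
  .option("password", zOS_PASSWORD)
  .load()

clientTrans_df.printSchema()
clientTrans_df.show(5)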

Join the VSAM and Db2 data into a dataframe in Spark

Run cell #3 – The Scala code in the third cell joins the VSAM and Db2 data into a new client_join dataframe in Spark.

Click on the next In [ ]: to select the next cell, and click the run cell button.

The output appears below the cell.
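In sketch form, and assuming a shared customer key column (CONT_ID is a placeholder; the real join condition is defined in the notebook), the join could look like this:

// Join the customer (VSAM) and transaction (Db2) data on a shared key.
val client_join = clientInfo_df.join(clientTrans_df, Seq("CONT_ID"))

client_join.printSchema()
println(client_join.count())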

Create a logistic regression dataframe and plot it

Run cell #4 – The Scala code in the fourth cell performs a logistic regression to evaluate the probability of customer churn as a function of customer activity level. The result_df dataframe is also created, which is used to plot the results on a line graph.

Click on the next In [ ]: to select the next cell, and click the run cell button.

The output appears below the cell.
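A hedged sketch of such a regression with Spark's spark.ml API is shown below; the feature and label column names (ACTIVITY_LEVEL and CHURN) are placeholders for whatever Demo.ipynb actually uses.

import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.functions.col

// Assemble the activity level into a feature vector and cast the churn flag
// to the double "label" column that LogisticRegression expects.
val assembler = new VectorAssembler()
  .setInputCols(Array("ACTIVITY_LEVEL"))
  .setOutputCol("features")

val training = assembler.transform(
  client_join.withColumn("label", col("CHURN").cast("double")))

// Fit a logistic regression of churn probability against activity level.
val lrModel = new LogisticRegression().fit(training)

// Score the data and keep the columns needed for plotting.
val result_df = lrModel.transform(training)
  .select("ACTIVITY_LEVEL", "probability", "prediction")

result_df.show(5)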

Run cell #5 – The Scala code in the fifth cell plots the bplot_df dataframe.

Click on the next In [ ]: to select the next cell, and click the run cell button.

The output appears below the cell.
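The plotting library that the notebook uses is not shown in this tutorial. As a minimal alternative under that caveat, you could collect the scored results (the result_df sketch from cell #4) back to the driver and print the churn probability for each activity level, or hand the same values to any JVM charting library:

import org.apache.spark.ml.linalg.Vector

// Pull (activity level, churn probability) pairs back to the driver. The
// probability column produced by LogisticRegression is a vector; element 1
// is the probability of the positive (churn) class.
val points = result_df
  .select("ACTIVITY_LEVEL", "probability")
  .collect()
  .map(row => (row.getDouble(0), row.getAs[Vector]("probability")(1)))

points.sortBy(_._1).foreach { case (activity, churnProb) =>
  println(f"activity=$activity%6.2f  churn probability=$churnProb%.3f")
}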

Get statistical data

To get the number of rows in the input VSAM dataset, use:

println(clientInfo_df.count())

The result should be 6001.

To get the number of rows in the input Db2 dataset, use:

println(clientTrans_df.count())

The result should be 20000.

To get the number of rows in the joined dataset, use:

println(client_df.count())

The result should be 112.

Summary

Congratulations on completing this how-to on running a Jupyter Notebook that uses Apache Spark on z/OS! Recall that the z/OS Platform for Apache Spark includes a supported version of the Apache Spark open source capabilities, consisting of the Apache Spark core, Spark SQL, Spark Streaming, the Machine Learning Library (MLlib), and GraphX. Be sure to use these tools in conjunction with your new skills to analyze more data with Apache Spark on z/OS.