Deep learning is a vast field that employs artificial neural networks to process data and train machine learning models. Two common approaches are supervised and unsupervised learning. In this tutorial, I focus on autoencoders, an unsupervised learning technique in which the model learns from unlabeled data sets. With this method, the model can learn patterns in the data and learn how to reconstruct its inputs as outputs after significantly compressing them.
Autoencoders have four main parts: the encoder, the bottleneck, the decoder, and the reconstruction loss.
- The encoder compresses the given input into a representation with reduced dimensionality.
- The bottleneck is the compressed representation of the encoded data.
- The decoder reconstructs the original input from the bottleneck representation.
- The reconstruction loss is the difference between the original input and the reconstructed output.
Input -> Encoder -> Bottleneck -> Decoder -> Output
The goal of the model is to minimize the difference between the original input and the reconstructed output, or in other words, to reduce the reconstruction loss.
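The flow above can be sketched numerically. The following is a minimal illustration (not the Notebook's code) of one forward pass through a linear autoencoder in NumPy, with mean squared error as the reconstruction loss; the dimensions and random weight matrices are arbitrary assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions for illustration): 8-dimensional input,
# 3-dimensional bottleneck.
input_dim, bottleneck_dim = 8, 3

x = rng.normal(size=(1, input_dim))                # Input
W_enc = rng.normal(size=(input_dim, bottleneck_dim))
W_dec = rng.normal(size=(bottleneck_dim, input_dim))

z = x @ W_enc          # Encoder -> Bottleneck (compressed representation)
x_hat = z @ W_dec      # Decoder -> Output (reconstruction)

# Reconstruction loss: mean squared difference between input and output.
reconstruction_loss = np.mean((x - x_hat) ** 2)
print(reconstruction_loss)
```

Training adjusts `W_enc` and `W_dec` so that this loss shrinks, which forces the bottleneck to keep only the information needed to rebuild the input.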
Some popular applications of autoencoders are image denoising, dimensionality reduction, and feature extraction. This tutorial touches on some of these applications and introduces basic autoencoder concepts using TensorFlow, a Python library that is used to create deep learning models.
IBM Watson Studio is a data science platform that provides all of the tools necessary to develop a data-centric solution on the cloud. In this tutorial, I use an MNIST data set that is deployed on Watson Studio on IBM Cloud Pak for Data.
In the tutorial, you import a Jupyter Notebook that is written in Python into IBM Watson Studio on IBM Cloud Pak for Data as a Service, then run through the Notebook. The Notebook creates an autoencoder model by using TensorFlow based on an MNIST data set, encoding and decoding the data. After running the Notebook, you should understand how TensorFlow builds and runs an autoencoder. You learn how to:
- Run a Jupyter Notebook using Watson Studio on IBM Cloud Pak for Data as a Service
- Build an autoencoder model using TensorFlow
- Train the model and evaluate the model by performing validation and testing
The following prerequisites are required to follow the tutorial:
It should take you approximately 1 hour to complete the tutorial. The tutorial consists of the following steps:
- Set up IBM Cloud Pak for Data as a Service.
- Create a new project and import the Notebook.
- Read through the Notebook.
- Run the Notebook.
Step 1. Set up IBM Cloud Pak for Data as a Service
To set up IBM Cloud Pak for Data as a Service:
Open a browser, and log in to IBM Cloud with your IBM Cloud credentials.
Type Watson Studio in the search bar. If you already have an instance of Watson Studio, it should be visible. If so, click it. If not, click Watson Studio under Catalog Results to create a new service instance.
Select the type of plan to create if you are creating a new service instance. Click Create.
Click Get Started on the landing page for the service instance.
This takes you to the landing page for IBM Cloud Pak for Data as a Service.
Click your avatar in the upper right, then click Profile and settings under your name.
Switch to the Services tab. You should see the Watson Studio service instance listed under Your Cloud Pak for Data services. You can also associate other services such as Watson Knowledge Catalog and Watson Machine Learning with your IBM Cloud Pak for Data as a Service account. These are listed under Try our available services.
In the example shown here, a Watson Knowledge Catalog service instance exists in the IBM Cloud account, so it’s automatically associated with the IBM Cloud Pak for Data as a Service account. To add any other service (Watson Machine Learning in this example), click Add within the tile for the service under Try our available services.
Select the type of plan to create, and click Create.
After the service instance is created, you are returned to the IBM Cloud Pak for Data as a Service instance. You should see that the service is now associated with Your IBM Cloud Pak for Data as a Service account.
Step 2. Create a new project and import the Notebook
Navigate to the menu (☰) on the left, and choose View all projects. After the screen loads, click New + or New project + to create a new project.
Select Create an empty project.
Provide a name for the project. You must associate an IBM Cloud Object Storage instance with your project. If you already have an IBM Cloud Object Storage service instance in your IBM Cloud account, it should automatically be populated here. Otherwise, click Add.
Select the type of plan to create, and click Create.
Click Refresh on the project creation page.
Click Create after you see the IBM Cloud Object Storage instance that you created displayed under Storage.
After the project is created, you can add the Notebook to the project. Click Add to project +, and select Notebook.
Switch to the From URL tab. Provide the name of the Notebook as
AutoencoderUsingTensorFlow and the Notebook URL as
Under the Select runtime drop-down menu, select Default Python 3.7 S (4 vCPU 16 GB RAM). Click Create.
After the Jupyter Notebook is loaded and the kernel is ready, you can start running the cells in the Notebook.
Important: Make sure that you stop the kernel of your Notebooks when you are done to conserve memory resources.
Note: The Jupyter Notebook included in the project has been cleared of output. If you would like to see the Notebook that has already been completed with output, refer to the example Notebook.
Step 3. Read through the Notebook
Spend some time looking through the sections of the Notebook to get an overview. A Notebook is composed of text (markdown or heading) cells and code cells. The markdown cells provide comments on what the code is designed to do.
You run cells individually by highlighting each cell, then either clicking Run at the top of the Notebook or using the keyboard shortcut to run the cell (Shift + Enter, but this can vary based on the platform). While the cell is running, an asterisk ([*]) appears to the left of the cell. When that cell has finished running, a sequential number appears in the brackets (for example, [1]).
Note: Some of the comments in the Notebook are directions for you to modify specific sections of the code. Perform any changes as indicated before running the cell.
The Notebook is divided into multiple sections:
- Feature Extraction and Dimensionality Reduction
- Autoencoder Structure
- Training: Loss Function
Section 6 contains the code to create, validate, test, and run the autoencoder model.
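As a preview of what the training section does, at a much smaller scale, the following sketch trains a tiny linear autoencoder by gradient descent in NumPy. The data, layer sizes, learning rate, and epoch count are all illustrative assumptions, not the Notebook's values, and the Notebook itself uses TensorFlow rather than hand-written gradients.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 64 samples of 8-dimensional inputs (illustrative assumption).
X = rng.normal(size=(64, 8))
W_enc = rng.normal(size=(8, 3)) * 0.1   # encoder weights
W_dec = rng.normal(size=(3, 8)) * 0.1   # decoder weights
lr = 0.05

def loss_fn(X, W_enc, W_dec):
    """Mean squared reconstruction error over every element of X."""
    X_hat = X @ W_enc @ W_dec
    return np.mean((X - X_hat) ** 2)

initial_loss = loss_fn(X, W_enc, W_dec)

for epoch in range(200):
    Z = X @ W_enc        # bottleneck representation
    X_hat = Z @ W_dec    # reconstruction
    err = X_hat - X
    n = X.size
    # Gradients of the mean squared error with respect to each weight matrix.
    grad_dec = Z.T @ err * (2 / n)
    grad_enc = X.T @ (err @ W_dec.T) * (2 / n)
    W_enc -= lr * grad_enc
    W_dec -= lr * grad_dec

final_loss = loss_fn(X, W_enc, W_dec)
print(initial_loss, final_loss)   # the loss drops as training proceeds
```

The Notebook's section 6 follows the same pattern, training, validation, and testing included, but lets TensorFlow compute the gradients automatically.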
Step 4. Run the Notebook
Run the code cells in the Notebook starting with the ones in section 4. The first few cells import the required modules, such as TensorFlow, NumPy, and the reader, along with the data set.
Note: The second code cell checks for the version of TensorFlow. The Notebook works only with TensorFlow version 2.2.0-rc0. Therefore, if an error is thrown here, you need to ensure that you have installed TensorFlow version 2.2.0-rc0 in the first code cell.
Note: If you have installed TensorFlow version 2.2.0-rc0 and still get the error, your changes are not being picked up, and you must restart the kernel by clicking Kernel->Restart and Clear Output. Wait until all of the outputs disappear, and then your changes should be picked up.
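Because the Notebook requires an exact version pin rather than a minimum version, any other release fails the check. The snippet below is a hypothetical helper illustrating that exact-match requirement; the pinned string comes from the tutorial, but the function name is an assumption for illustration, not the Notebook's code.

```python
# Pinned version required by the Notebook, per the tutorial.
REQUIRED_TF_VERSION = "2.2.0-rc0"

def version_matches(installed: str, required: str = REQUIRED_TF_VERSION) -> bool:
    """Return True only when the installed version is the exact pin."""
    return installed == required

# A check of this kind could be run in the Notebook as:
#   import tensorflow as tf
#   assert version_matches(tf.__version__), "wrong TensorFlow version"
print(version_matches("2.2.0-rc0"))   # the pinned release-candidate passes
print(version_matches("2.2.0"))       # even the final 2.2.0 release fails
```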
The training, validation, and testing of the model do not happen until the last code cell. Running all 20 epochs takes some time, approximately 30 minutes.
In this tutorial, you learned about autoencoders and ran an implementation in a Jupyter Notebook using Watson Studio on IBM Cloud Pak for Data as a Service. Python libraries such as TensorFlow, Matplotlib, and NumPy were also key to creating the model.