In this pattern, learn how to create and deploy deep learning models by using a Jupyter Notebook in an IBM Watson Studio environment. You also create deep learning experiments with hyperparameter optimization by using a Watson Studio GUI for monitoring different runs, then select the best model for deployment.
Computer vision is on the rise, and there might be scenarios where a machine must classify images based on their class to aid in the decision-making process. In this code pattern, we demonstrate how to do multiclass classification (with three classes) by using IBM® Watson™ Studio and IBM Deep Learning as a Service. We use yoga postures data to identify the class given an image. This methodology can be applied to any domain and data set that requires multiple classes of images to be classified accurately and can be extended for further analysis.
IBM Deep Learning as a Service enables organizations to overcome the common barriers to deep learning deployment: skills, standardization, and complexity. It embraces a wide variety of popular open source frameworks such as TensorFlow, Caffe, and PyTorch, and offers them as a cloud-native service on IBM Cloud, lowering the barrier to entry for deep learning. It combines the flexibility, ease of use, and economics of a cloud service with the compute power of deep learning. With easy-to-use REST APIs, you can train deep learning models with varying amounts of resources to match user requirements or budgets.
Currently, training of deep neural networks is highly complex and computationally intensive. It requires a highly tuned system with the right combination of software, drivers, computing power, memory, network, and storage resources. To realize the full potential of deep learning, we want the technology to be more easily accessible to developers and data scientists so that they can focus more on doing what they do best, concentrating on data and its refinements, training neural network models with automation over these large data sets, and creating cutting-edge models.
In this code pattern, we demonstrate how to create and deploy deep learning models by using a Jupyter Notebook (using CPU) in a Watson Studio environment. You also create deep learning experiments (using GPU) with hyperparameter optimization by using a Watson Studio GUI for monitoring different runs, then select the best model for deployment.
When you have completed this code pattern, you will understand how to:
- Preprocess the images to get them ready for model building
- Access the image data from IBM Cloud Object Storage and write the predicted output to Cloud Object Storage
- Create a step-by-step deep learning model (code-based) that includes flexible hyperparameters to classify the images accurately
- Create experiments in Watson Studio (GUI-based) for deploying state-of-the-art models with hyperparameter optimization
- Create visualizations for a better understanding of the model predictions
- Interpret the model summary and generate predictions using the test data
- Analyze the results for further processing to generate recommendations or make informed decisions
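As a minimal sketch of the preprocessing step, the snippet below scales pixel values and one-hot encodes labels with NumPy. The class names are hypothetical stand-ins for the three yoga postures (the actual labels depend on the data set you upload), and resizing would normally be done with a library such as Pillow before this step.

```python
import numpy as np

# Hypothetical class names for the three postures; the actual labels
# depend on the image data set uploaded to Cloud Object Storage.
CLASSES = ["downwarddog", "tree", "warrior"]

def preprocess(image):
    """Scale pixel values of a raw RGB uint8 image array to [0, 1],
    the range the CNN expects as input."""
    return image.astype("float32") / 255.0

def one_hot(label):
    """Map a class name to a one-hot vector for categorical cross-entropy."""
    vec = np.zeros(len(CLASSES), dtype="float32")
    vec[CLASSES.index(label)] = 1.0
    return vec

# Example: a dummy 224x224 RGB image with all pixels at 128.
img = np.full((224, 224, 3), 128, dtype="uint8")
x = preprocess(img)
y = one_hot("tree")
```

The one-hot vectors pair with a three-unit softmax output layer, which is the usual setup for a three-class classifier.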
- User uploads image data to IBM Cloud Object Storage.
- User accesses the data in a Jupyter Notebook.
- User runs the baseline model Notebook that has the deep learning CNN model along with tunable hyperparameters.
- Notebook trains on the sample images from the train and validation data sets and classifies the test data images using the deep learning model.
- User can classify images into different classes using a REST client.
- User can write the predicted output to Cloud Object Storage in a .csv format that can be downloaded for further analysis.
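The last step of the flow, serializing predictions to .csv, can be sketched as follows. This is an illustrative helper (the function name and columns are assumptions, not the Notebook's exact code); in the Notebook the resulting text would then be uploaded to the Cloud Object Storage bucket, for example with the ibm_boto3 client.

```python
import csv
import io

def predictions_to_csv(filenames, predicted_classes):
    """Serialize per-image predictions to CSV text ready for upload
    to a Cloud Object Storage bucket and later download for analysis."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["image", "predicted_class"])
    for name, cls in zip(filenames, predicted_classes):
        writer.writerow([name, cls])
    return buf.getvalue()

# Example with two hypothetical test images and their predicted classes.
csv_text = predictions_to_csv(
    ["pose_001.jpg", "pose_002.jpg"],
    ["tree", "warrior"],
)
```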
Find the detailed steps for this pattern in the README. Those steps show you how to:
- Create an account with IBM Cloud.
- Create a new Watson Studio project.
- Create the Notebook.
- Add the data.
- Insert the credentials.
- Run the Notebook.
- Analyze the results.
- Access the Cloud Object Storage bucket.
- Run the Notebook and publish it to Watson Machine Learning.
- Create experiments using GPU for hyperparameter optimization.
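Conceptually, the GPU experiments in the last step search a space of hyperparameters and keep the run with the best validation metric. The sketch below illustrates that idea with a plain grid search; the search ranges are assumptions, and `train_and_validate` is a stand-in stub for a real GPU training run, not the Watson Machine Learning API.

```python
import itertools

# Hypothetical search space; a GUI experiment defines similar ranges
# in its hyperparameter optimization settings.
learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [16, 32]

def train_and_validate(lr, batch_size):
    """Stand-in for a real training run; returns a validation accuracy.
    A toy formula is used here so the sketch is self-contained."""
    return 0.9 - abs(lr - 1e-3) * 10 - abs(batch_size - 32) / 1000

# Select the hyperparameter combination with the best validation score,
# which is the model you would then deploy.
best = max(
    itertools.product(learning_rates, batch_sizes),
    key=lambda p: train_and_validate(*p),
)
```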