Deploy and use a multi-framework deep learning platform on Kubernetes  

Install and consume a deep learning platform on Kubernetes with TensorFlow, Caffe, PyTorch, and more

By Animesh Singh, Scott Boag, Tommy Li, Waldemar Hummer

Description

As a deep learning practitioner, you want reliability and scalability while orchestrating your training jobs, and you want to work in a consistent manner across multiple libraries. Fabric for Deep Learning (FfDL) on Kubernetes achieves this by giving users the ability to leverage deep learning libraries such as Caffe, Torch, and TensorFlow in the cloud in a resilient manner with minimal effort. The platform uses a distribution and orchestration layer that facilitates learning from a large amount of data in a reasonable amount of time across compute nodes. A resource provisioning layer enables flexible job management on heterogeneous resources, such as graphics processing units (GPUs) and central processing units (CPUs), in an infrastructure as a service (IaaS) cloud.

Overview

Training deep neural networks, a process known as deep learning and a subset of machine learning methods, is highly complex and computationally intensive. A typical user of deep learning is unnecessarily exposed to the details of the underlying hardware and software infrastructure, including configuring expensive GPU machines, installing deep learning libraries, and managing the jobs during execution to handle failures and recovery. Despite the ease of obtaining hardware from IaaS clouds and paying by the hour, the user still needs to manage those machines, install required libraries, and ensure the resiliency of the deep learning training jobs.

This is where the opportunity of deep learning as a service lies. In this code pattern, we show you how to deploy and use a deep learning Fabric on Kubernetes, built from cloud-native architectural artifacts like Kubernetes, microservices, Helm charts, and object storage. The Fabric spans multiple deep learning engines, including TensorFlow, Caffe, and PyTorch, and combines the flexibility, ease of use, and economics of a cloud service with the power of deep learning. It is easy to use, and through its REST APIs you can customize training with different resources per user requirements or budget. This lets users focus on deep learning and their applications rather than on handling infrastructure faults.
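
For example, once the platform is running, a training job can be submitted to the REST API as a manifest file plus a zipped model definition. The sketch below is illustrative: the endpoint path, version parameter, headers, and file names follow the examples published in the FfDL repository at the time of writing and may differ in your deployment.

    # Illustrative sketch: endpoint, headers, and file names are assumptions
    # based on the FfDL examples; substitute your own endpoint and files.
    restapi_url="http://<node-ip>:<restapi-port>"   # see the Instructions section
    curl -X POST "$restapi_url/v1/models?version=2017-02-13" \
      -H "accept: application/json" \
      -H "X-Watson-Userinfo: bluemix-instance-id=test-user" \
      -F "manifest=@manifest.yml" \
      -F "model_definition=@tf-model.zip"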

Flow

  1. The FfDL deployer deploys the FfDL code base to a Kubernetes cluster. The Kubernetes cluster is configured to use GPUs, CPUs, or both, and has access to S3-compatible object storage. If no object store is specified, a locally simulated S3 pod is created.
  2. Once deployed, the data scientist uploads the model training data to the S3-compatible object store. FfDL assumes the data is already in the required format as prescribed by different deep learning frameworks.
  3. The user creates an FfDL model manifest file. The manifest contains fields that describe the model, its object store information, its resource requirements, and several arguments (including hyperparameters) required for model execution during training and testing; a representative manifest sketch follows this list. The user then interacts with FfDL through the CLI, SDK, or UI to deploy the manifest file together with a model definition file, launches the training job, and monitors its progress.
  4. The user downloads the trained model and associated logs once the training job is complete.
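
As referenced in step 3, here is a minimal manifest sketch for a TensorFlow job. The field names mirror the examples published in the FfDL repository; the bucket names, credentials, and training command are placeholders to replace with your own values.

    # Sketch of an FfDL model manifest. Field names follow the FfDL
    # examples; every value below is a placeholder, not a working setting.
    name: tf_convolutional_network
    description: Convolutional network model using MNIST data
    version: "1.0"
    gpus: 0
    cpus: 0.5
    memory: 1Gb
    learners: 1
    data_stores:
      - id: sl-internal-os
        type: mount_cos
        training_data:
          container: tf_training_data      # bucket with the uploaded training data
        training_results:
          container: tf_trained_model      # bucket for the trained model and logs
        connection:
          auth_url: http://s3.default.svc.cluster.local
          user_name: test                  # placeholder credentials
          password: test
    framework:
      name: tensorflow
      version: "1.5"
      command: python3 convolutional_network.py --trainingIters 2000

The data_stores section points at the buckets populated in step 2, and the framework section selects the deep learning engine, its version, and the command that starts training.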

Instructions

Find the detailed steps for this pattern in the README. The steps will show you how to:
  1. Compile the code and build Docker images.
  2. Install the FfDL components with helm install (see the command sketch after this list).
  3. Run a script to configure Grafana for monitoring FfDL.
  4. Obtain your Grafana, FfDL Web UI, and FfDL REST API endpoints.
  5. Run some simple jobs to train a convolutional network model by using TensorFlow and Caffe.
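
The sketch below condenses steps 2 through 4 into shell commands. The script and service names reflect the FfDL repository at the time of writing and should be treated as assumptions; the README remains the authoritative reference.

    # Assumes kubectl and helm are configured for your cluster and the FfDL
    # repository is checked out; script and service names follow that repo.
    helm install .                    # step 2: install the FfDL components
    ./bin/grafana.init.sh             # step 3: configure Grafana monitoring
    # Step 4: derive endpoints from a node IP and the services' node ports.
    node_ip=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}')
    grafana_port=$(kubectl get service grafana -o jsonpath='{.spec.ports[0].nodePort}')
    ui_port=$(kubectl get service ffdl-ui -o jsonpath='{.spec.ports[0].nodePort}')
    restapi_port=$(kubectl get service ffdl-restapi -o jsonpath='{.spec.ports[0].nodePort}')
    echo "Grafana:  http://$node_ip:$grafana_port"
    echo "Web UI:   http://$node_ip:$ui_port"
    echo "REST API: http://$node_ip:$restapi_port"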

Related Links

Deep Learning as a Service

The two trends, deep learning and “as-a-service,” are colliding to give rise to a new business model for cognitive application delivery.