This post is co-authored by Angel Diaz, Vice President, Developer Technology and Advocacy; Ruchir Puri, Chief Architect, IBM Watson; and Rania Khalaf, Director, AI Engineering, IBM Research

Delivering deep learning capabilities into the hands of data scientists and AI developers

According to Gartner, artificial intelligence will be the most disruptive class of technology over the next 10 years due to radical computational power, near-endless amounts of data, and unprecedented advances in deep learning. The rise of deep learning has been fueled by three recent trends: the explosion in the amount of training data; the use of accelerators such as graphics processing units (GPUs); and the advancement in training algorithms and neural network architectures.

To realize the full potential of this rising trend, we want the technology to be easily accessible to the people it matters most to: data scientists and AI developers. Training deep neural networks, known as deep learning, is currently highly complex and computationally intensive. It requires a highly tuned system with the right combination of software, drivers, compute, memory, network, and storage resources. Data scientists and AI developers should be free to do what they do best: work with data and its refinement, train neural network models (with automation) over these large data sets, and create cutting-edge models.

So we are happy to announce the launch of Deep Learning as a Service within Watson Studio. It embraces a wide array of popular open source frameworks such as TensorFlow, Caffe, and PyTorch, and offers them as a truly cloud native service on IBM Cloud, lowering the barrier to entry for deep learning. It combines the flexibility, ease of use, and economics of a cloud service with the power of deep learning. With easy-to-use REST APIs, you can train deep learning models with the amount of resources that fits your requirements or budget. It is resilient (it handles failures), and it frees data scientists and AI developers to spend their time on deep learning and its applications.
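To make that REST-based flow concrete, here is a minimal sketch of submitting a training job from Python. The endpoint path, request fields, and response fields below are illustrative assumptions for this post, not the documented service API; consult the Watson Studio documentation for the actual interface.

```python
# Illustrative sketch only: the endpoint, field names, and auth details are
# assumptions for demonstration, not the documented Watson Studio API.
import requests

API_BASE = "https://example.cloud.ibm.com/v3/models"  # hypothetical endpoint
HEADERS = {
    "Authorization": "Bearer <IAM_TOKEN>",  # token obtained from IBM Cloud IAM
    "Content-Type": "application/json",
}

# A hypothetical training-job definition: framework, command, and resources.
job = {
    "name": "mnist-cnn",
    "framework": {"name": "tensorflow", "version": "1.5"},
    "command": "python train.py --epochs 10",
    "resources": {"gpus": 1, "cpus": 4, "memory": "8Gb"},
    "data_store": {"type": "s3", "bucket": "training-data"},
}

# Submit the job, then poll its status through the same REST interface.
resp = requests.post(API_BASE, json=job, headers=HEADERS)
resp.raise_for_status()
model_id = resp.json()["model_id"]  # hypothetical response field

status = requests.get(f"{API_BASE}/{model_id}", headers=HEADERS).json()
print(status.get("training_status"))
```

The point of the sketch is the shape of the workflow: you describe the framework, training command, and resources in a single request, and the service handles provisioning and scheduling behind the API.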

Contribute to the revolution: It’s open source!

IBM has a long history of establishing open source centers of gravity, most recently across Cloud, Data, AI, and Transactions. We are contributing the core of Watson Studio's Deep Learning Service as an open source project: Fabric for Deep Learning, or FfDL (pronounced "fiddle"). Leveraging the power of Kubernetes, FfDL provides a scalable, resilient, and fault-tolerant deep learning framework. The platform uses a distribution and orchestration layer that facilitates learning from a large amount of data in a reasonable amount of time across compute nodes. A resource-provisioning layer enables flexible job management on heterogeneous resources, such as GPUs and CPUs, on top of Kubernetes.
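As an illustration of the resource-provisioning idea, the sketch below shows how a learner container can declare the GPUs, CPUs, and memory it needs in a Kubernetes pod specification, using the official Kubernetes Python client. The image, names, and labels are placeholders for illustration; they are not FfDL's own manifests or deployment objects.

```python
# Minimal sketch of a learner pod requesting heterogeneous resources on
# Kubernetes. Names, labels, and the container image are illustrative.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

container = client.V1Container(
    name="learner",
    image="tensorflow/tensorflow:latest-gpu",  # framework image of choice
    command=["python", "train.py"],
    resources=client.V1ResourceRequirements(
        # Request one GPU alongside CPU and memory limits for this learner.
        limits={"nvidia.com/gpu": "1", "cpu": "4", "memory": "8Gi"},
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="learner-example", labels={"job": "mnist"}),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

# Hand the pod specification to the cluster; the scheduler places it on a
# node that can satisfy the requested GPU, CPU, and memory limits.
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Expressing resource needs declaratively like this is what lets a fabric such as FfDL schedule GPU and CPU workloads side by side on shared infrastructure.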

Jim Zemlin, Executive Director of The Linux Foundation, echoes these sentiments succinctly:

“Just as The Linux Foundation worked with IBM, Google, Red Hat and others to establish the open governance community for Kubernetes with the Cloud Native Computing Foundation, we see IBM’s release of Fabric for Deep Learning, or FfDL, as an opportunity to work with the open source community to align related open source projects, taking one more step toward making deep learning accessible. We think its origin as an IBM product will appeal to open source developers and enterprise end users.”

FfDL Architecture diagram

Deep Learning as a Service within Watson Studio was created from the start in close collaboration with deep learning developers across speech, vision, and natural language classification domains. These insights shaped its design, revealing synergies between different workloads and enabling us to offer a true cloud native service in which the infrastructure is shared across workloads behind common API-based access. Fabric for Deep Learning (FfDL) carries this learning over into the open source framework.

Try it today and join the deep learning revolution!

Tarry Singh, Founder and AI Neuroscience Researcher at deepkapha.ai, who is collaborating with us on FfDL, captures the project’s vision perfectly:

“We are working with IBM and FfDL to further expand our vision at DeepKapha — bringing deep learning to the masses, with a holistic stack for cognitive algorithm across stacks (vertically) and cloud platforms (horizontally).”

For a truly cloud native, resilient, and scalable experience with rich enterprise features, please try Deep Learning as a Service within Watson Studio. Deploy FfDL, use it, and extend it. We look forward to your feedback.

Let’s start the revolution for the democratization of deep learning!
