Intro to deep learning and PowerAI
Bring more value to your organization's data by developing with an entirely new approach to problems: deep learning, machine learning, and artificial intelligence. Unlock hidden potential and patterns in data organically, without having to know the patterns or networks in advance or be an algorithmic expert. With PowerAI, IBM makes deep learning easier and more performant for you through an enterprise software distribution of the most popular open source frameworks.
What is deep learning?
Deep learning consists of algorithms that permit software to train itself by exposing multilayered neural networks to vast amounts of data. It is most frequently used to perform tasks like speech and image recognition.
The intelligence in the process sits within the deep learning software frameworks themselves, which develop a neural model of understanding by building weights and connections between data points, often millions of them in a training data set.
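As a toy illustration of that idea, the sketch below (plain Python, not PowerAI code) trains a single artificial neuron, adjusting its weights through repeated exposure to a handful of data points. Real frameworks do the same thing across millions of parameters:

```python
# Conceptual sketch: a single neuron learning weights from data points,
# illustrating how training builds weighted connections. The data and
# learning rate here are illustrative only.
def train_neuron(samples, epochs=200, lr=0.1):
    """Learn weights w and bias b so that w . x + b approximates y."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            # Gradient descent: nudge each weight against the error.
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Target relationship hidden in the data: y = 2*x0 + 3*x1 + 1
data = [((1, 0), 3), ((0, 1), 4), ((1, 1), 6), ((2, 1), 8)]
w, b = train_neuron(data)
```

After training, `w` and `b` converge close to the underlying relationship (roughly `[2, 3]` and `1`), even though that relationship was never stated explicitly, only exemplified by data.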
Why use deep learning?
Deep learning thrives where other, more traditional techniques fail: when you want to derive insightful or complex relationships from vast amounts of data, when custom programming is impractical, or when the data is visual or auditory.
Deep learning does require a larger data set to produce results commensurate with traditional machine learning techniques. However, it minimizes the amount of interpretive work required from data scientists. Enter IBM PowerAI.
What is IBM PowerAI?
PowerAI is an enterprise software distribution that combines popular open source deep learning frameworks, efficient AI development tools, and accelerated IBM® Power Systems™ servers to take your deep learning projects to the next level.
PowerAI includes industry-leading open source frameworks and can support up to four nodes, as shown in the figure below.
PowerAI Enterprise includes the open frameworks, libraries, and tools built into PowerAI, plus the additional components shown in the diagram below. In addition, PowerAI Enterprise can scale from a single node to hundreds of nodes.
Which PowerAI option is right for you?
The PowerAI product line includes three different options:
- PowerAI: open source software well suited to developers and data scientists just getting started with their development efforts and prototypes.
- PowerAI Enterprise: a fully supported suite of deep learning frameworks intended for enterprises looking to rapidly scale their AI applications.
- PowerAI Vision: a fully supported, enterprise-grade suite of tools for labeling raw datasets and for training, creating, and deploying deep learning-based vision models.
Continue reading for information about each of the different offerings.
Develop with PowerAI
PowerAI is a software distribution of popular open source deep learning frameworks, such as TensorFlow, Keras, and Caffe.
- Deploy in hours, not months, through a binary download of the key open source frameworks
- Available paid support
Performance for faster training times
- Faster training times and incredible cluster scaling efficiency (up to 56X and 95%, demonstrated). Learn more by reading this blog, Scaling TensorFlow and Caffe to 256 GPUs
- Large model support, enabling you to use higher resolution data
- Check out all of the Machine Learning and Deep Learning performance proof-points on IBM Power Systems
Tools for ease of development
- Reduce data preparation time by an order of magnitude, with upcoming tools
- Automated hyper-parameter tuning and optimization to make your models faster and more accurate
PowerAI includes all necessary dependencies, removing the time, effort, and difficulty associated with getting a deep learning environment operational and performing optimally.
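The automated hyper-parameter tuning mentioned above can be pictured with its simplest possible form, an exhaustive grid search. The sketch below is illustrative only (it is not the PowerAI tooling, and the model, metric, and hyper-parameter names are made up):

```python
# Conceptual sketch of hyper-parameter tuning via grid search: try every
# combination of candidate settings and keep the one with the lowest
# validation loss. Real tuners search far more efficiently.
from itertools import product

def validation_loss(lr, batch_size):
    # Stand-in for "train a model, return its validation loss";
    # here we pretend the best settings are lr=0.01, batch_size=64.
    return abs(lr - 0.01) * 100 + abs(batch_size - 64) / 64

grid = {"lr": [0.1, 0.01, 0.001], "batch_size": [32, 64, 128]}
best = min(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=lambda cfg: validation_loss(**cfg),
)
print(best)  # -> {'lr': 0.01, 'batch_size': 64}
```

Automating this search, and smarter variants of it, is what lets a tuning tool make models faster and more accurate without manual trial and error.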
Deploy with PowerAI Enterprise
For enterprises looking to rapidly scale their deep-learning applications, PowerAI Enterprise combines the PowerAI features above with additional functionality to optimize and speed up the completion of your training, testing, and validation. PowerAI Enterprise truly shines when you are looking to expand into distributed deep learning with more than four nodes.
Spectrum Conductor, included in PowerAI Enterprise, excels at job scheduling across thousands of nodes, improving system performance by 40% compared to YARN for Spark. This reduces training time and increases the efficiency of your nodes by keeping them as close to 100% utilized as possible.
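Why scheduling drives utilization can be sketched with a toy first-fit-decreasing placement of jobs onto nodes. This is purely illustrative (Spectrum Conductor's real scheduler is far more sophisticated, and the capacities and job sizes below are made up):

```python
# Conceptual sketch: pack jobs (here, GPU counts) onto nodes so that as
# little capacity as possible sits idle. Sorting jobs largest-first and
# placing each on the first node with room is a classic packing heuristic.
def schedule(jobs, node_capacity):
    """Place each job on the first node that still has room for it."""
    nodes = []  # each node is a list of the job sizes placed on it
    for job in sorted(jobs, reverse=True):
        for node in nodes:
            if sum(node) + job <= node_capacity:
                node.append(job)
                break
        else:
            nodes.append([job])  # no node had room: start a new one
    return nodes

nodes = schedule(jobs=[2, 4, 1, 3, 2, 4], node_capacity=4)
utilization = sum(map(sum, nodes)) / (len(nodes) * 4)
print(nodes, utilization)  # 4 nodes, 100% utilized
```

A naive in-order placement of the same jobs would strand capacity on partially filled nodes; better packing is exactly how a scheduler buys back training time.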
Analyze images and video with PowerAI Vision
PowerAI Vision provides tools and interfaces that let business analysts, subject matter experts, and developers with no deep learning expertise begin using deep learning. This enterprise-grade software provides a complete ecosystem to label raw data sets for training, creating, and deploying deep learning-based models. It can help train highly accurate models to classify images and detect objects in images and videos.
The tools help users focus on rapidly identifying and labeling datasets. Users can then train and validate a model in a GUI to build customized solutions for image classification and object detection. PowerAI Vision is available as an add-on to PowerAI Enterprise.
PowerAI and PowerAI Enterprise include the following technology previews:
PyTorch Large Model Support (LMS)
Large Model Support is a feature provided in PowerAI PyTorch that allows the successful training of deep learning models that would otherwise exhaust GPU memory and abort with "out of memory" errors. LMS manages this oversubscription of GPU memory by temporarily swapping tensors to host memory when they are not needed.
See the "Getting started with PyTorch" topic in the IBM Knowledge Center for more information.
Caffe2 and ONNX 1.3.0
Caffe2 aims to be a production complement to PyTorch (which is focused more on experimentation and rapid development). PowerAI support for Caffe2 is included in the PyTorch package. It’s set up and activated along with PyTorch.
ONNX is the Open Neural Network Exchange format, which allows developers to move models between frameworks more easily (see https://onnx.ai/). ONNX is included in PowerAI to assist with moving models between PyTorch and Caffe2. It is packaged as a conda package and is installed automatically during the install_dependencies step.
See the "Getting started with Caffe2 and ONNX" topic in the IBM Knowledge Center for more information.
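To illustrate what a framework-neutral exchange format buys you, the toy sketch below "exports" a tiny model as plain data and replays it in a minimal interpreter. Real ONNX uses protobuf and a rich standardized operator set; this JSON graph and its three operators are purely illustrative:

```python
# Conceptual sketch of an exchange format: a model described as a graph of
# named operators that any runtime can load and execute, independent of
# the framework that produced it.
import json

# A tiny "model": y = relu(w * x + b), exported as framework-neutral data.
exported = json.dumps({
    "inputs": ["x"],
    "initializers": {"w": 2.0, "b": -1.0},
    "nodes": [
        {"op": "Mul", "in": ["w", "x"], "out": "t1"},
        {"op": "Add", "in": ["t1", "b"], "out": "t2"},
        {"op": "Relu", "in": ["t2"], "out": "y"},
    ],
    "outputs": ["y"],
})

def run(graph_json, feeds):
    """A minimal 'runtime' that replays the exported graph."""
    g = json.loads(graph_json)
    env = dict(g["initializers"], **feeds)
    ops = {"Mul": lambda a, b: a * b, "Add": lambda a, b: a + b,
           "Relu": lambda a: max(a, 0.0)}
    for node in g["nodes"]:
        env[node["out"]] = ops[node["op"]](*(env[i] for i in node["in"]))
    return [env[o] for o in g["outputs"]]

print(run(exported, {"x": 3.0}))  # relu(2*3 - 1) -> [5.0]
```

Because both sides agree only on the graph format and operator semantics, a model trained in one framework (here, the "exporter") can be served by an entirely different one (the "runtime").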
Snap ML on Spark (snap-ml-spark)
snap-ml-spark brings GPU-accelerated classical machine learning functions to Apache Spark through PySpark.
Similar to snap-ml-mpi, the snap-ml-spark package offers distributed training of models across a cluster of machines. The library is exposed to the user via a spark.ml-like interface and can be seamlessly integrated into existing PySpark applications. For information about snap-ml-spark, see /opt/DL/snap-ml-spark/doc/README.md.
Find information about getting started with Snap ML in the IBM Knowledge Center.
For more information about Snap ML and the IBM Research team that developed it, go here: https://www.zurich.ibm.com/snapml/
TensorFlow Large Model Support (TFLMS)
TFLMS is a Python graph-editing library that brings Large Model Support (LMS) to TensorFlow, providing an approach to training models, data, and batch sizes that cannot normally fit into GPU memory. It takes a computational graph defined by the user and automatically adds swap-in and swap-out nodes for transferring tensors between the GPU and the host. During training and inference, this makes graph execution behave like operating system memory paging: system memory is effectively treated as a paging cache for GPU memory, and tensors are swapped back and forth between the two.
For more information:
- See the TFLMS README on GitHub: https://github.com/IBM/tensorflow-large-model-support/blob/master/README.md
- See the "Getting started with TensorFlow" topic in the IBM Knowledge Center for more information.
- Read this blog, TensorFlow Large Model Support Case Study with 3D Image Segmentation
PowerAI helps you get started faster with your deep learning development. Here are some tips for using PowerAI to add deep learning and AI to your application.
Ready your tools
Follow these simple steps to get your PowerAI-based application development started.
PowerAI deploys on a system far more rapidly than manual installation of frameworks. Start with:
- Red Hat Enterprise Linux (RHEL) 7.5
- NVIDIA CUDA SDK
- NVIDIA GPU Driver for Linux
The PowerAI binaries are available as RPM packages and run on IBM Power Systems S822LC and AC922. See the PowerAI release notes for more information.
Test your frameworks
Once you've deployed PowerAI, you can test each of the deep learning training frameworks. Each framework included in PowerAI is unique, and selecting a preferred framework for your application is important.
The integrated installer means everything is installed and performing well, so you can rapidly try examples in each framework and select the one suited to your preferences.
Devise your approach and start training
Collecting great input data to train on is critical to your model's success. Don't neglect existing data inside your organization, and consider training on external datasets as well. Your data can be visual, audio, text, or beyond.
Packages like TensorFlow in PowerAI incorporate tools to help make your training network design even easier.
Ideas to get you started
Here are some recommendations for how to add deep learning to your applications.
- Layer Deep Learning atop your existing data-store
Tease out value from your existing data by applying deep learning as a technique for advanced analysis.
- Reshape or augment an existing business process
Augment human insight or manual labor with machine intelligence. Use deep learning to train a visual or audio recognition system that helps guide decisions.
- Apply deep learning before HPC simulation
Improve the quality of your HPC simulation runs by using deep learning to identify which kinds of simulations to run or run first. Then run those high-likelihood simulations with greater precision.
- Apply deep learning after HPC simulation
Drowning in data after your HPC simulations run? Sift through existing unstructured data or vast outputs of a simulation with Deep Learning and gain new insights rapidly.
If you’re already working on other hardware…
- Use PowerAI to get started faster, with all of the performance benefits of IBM Power
- Use PowerAI to compare frameworks
Education and tech resources
Courses and learning paths
- Using GPUs to Scale and Speed-up Deep Learning
In this edX.org course, you will learn how to use accelerated GPU hardware to overcome the scalability problem in deep learning. Get started now
Data science and cognitive computing courses
Build your Deep Learning skills, for free, with this learning path from cognitiveclass.ai.
Deep Learning Fundamentals
A great introduction to deep learning concepts, the different kinds of neural networks, and a non-exhaustive catalog of some of the critical frameworks.
Deep Learning with TensorFlow
Begin to practice deep learning by learning how to work with TensorFlow, a key framework included in PowerAI from our collaborators at Google.
Accelerating Deep Learning with GPU
This course allows you to discover for yourself the value of the POWER architecture and GPUs for deep learning workloads. Through hands-on exercises, the advantage of POWER + GPU becomes obvious.
Deep Learning at Udacity
Udacity offers the world’s most popular courses on Deep Learning. Longer form, but with deep educational rewards, this is the place to invest in your deep learning skills. Then, return to apply these skills with PowerAI.
Deep Learning, by Google
Worldwide experts at Google and the Google Brain project deliver this well-structured long-term course. With a particular emphasis on TensorFlow, you’ll learn skills and complete a multitude of assignments that are similar to real-world problems.
Self-Driving Car Nanodegree Program
NVIDIA has partnered with Udacity to offer world-class curriculum, expert instructors, and exclusive hiring opportunities in this self-driving car program.
- NVIDIA Deep Learning Institute (DLI)
DLI offers hands-on training for developers, data scientists, and researchers looking to solve challenging problems with deep learning. Learn techniques for designing, training, and deploying neural networks for your domain. Explore common open source frameworks and NVIDIA's latest GPU-accelerated deep learning platforms. Get started with DLI labs
- Classify images with PowerAI Enterprise
In this tutorial, you will perform a basic computer vision image classification example using the Deep Learning Impact function within PowerAI Enterprise.
- PowerAI DDL (Research paper)
This research paper, published on the Cornell University Library, presents a software-hardware co-optimized distributed Deep Learning system that can achieve near-linear scaling up to hundreds of GPUs.
- TensorFlow Large Model Support Code / Pull Request
This PR proposes a new module, named lms, in contrib, which helps TensorFlow with training large models that cannot be fit into GPU memory.
- TFLMS: Large Model Support in TensorFlow by Graph Rewriting (Research paper)
While accelerators such as GPUs have limited memory, deep neural networks are becoming larger and will no longer fit within the memory limits of accelerators for training. In this research paper, the authors propose an approach to tackle this problem.
- TensorFlow Large Model Support Case Study
This blog describes how combining TFLMS with AC922 servers and their NVLink 2.0 connected GPUs allows data scientists to quickly iterate while training with large models and data.
- Understanding and Optimizing the Performance of Distributed Machine Learning Applications on Apache Spark (Research paper)
In this paper, the authors explore the performance limits of Apache Spark for machine learning applications.
- Large-Scale Stochastic Learning using GPUs (IBM Research paper)
In this paper, members of the IBM Research team in Zurich propose an accelerated stochastic learning system for very large-scale applications. Acceleration is achieved by mapping the training algorithm onto massively parallel processors.
- Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems (IBM Research paper)
In this paper, the authors propose a generic algorithmic building block to accelerate training of machine learning models on heterogeneous compute systems.
- Snap ML: A Hierarchical Framework for Machine Learning
In this paper, the authors describe a new software framework for fast training of generalized linear models. The framework, named Snap Machine Learning (Snap ML), combines recent advances in machine learning systems and algorithms in a nested manner to reflect the hierarchical architecture of modern computing systems.
- IBM PowerAI: Deep Learning Unleashed on IBM Power Systems Servers (IBM Redbook)
This book provides an introduction to AI and deep learning, IBM PowerAI, and components of IBM PowerAI.
- Machine Learning/Deep Learning performance on IBM Power Systems
Review the machine learning / deep learning performance claims.
- Tracking the Millennium Falcon with TensorFlow
Learn about using PowerAI + Watson to track the Millennium Falcon.
- Bringing the Power of Deep Learning to More Data Scientists
Unlock new analytical insights with PowerAI enterprise software distribution and the Data Science Experience.
- IBM PowerAI: Deep Learning Unleashed on IBM Power Systems Servers
This IBM Redbooks publication provides an introduction to working with data and creating models with IBM PowerAI and its components.
- Machine Learning/Deep Learning performance on IBM Power Systems
IBM Power Systems deliver superior price-performance over x86 competitors. Review all the machine learning/deep learning performance claims and proof points.
ParallelForAll: Deep Learning
All posts about Deep Learning. See the code or software at work for many Deep Learning challenges.
ParallelForAll: Complete Archive
Looking to bring GPUs elsewhere in your application? Get inspired by these in-depth examples.
Install TensorFlow on Power systems
This tutorial demonstrates installing TensorFlow master code on a POWER8 server with Ubuntu 16.04, Python 3.5, and NVIDIA CUDA support.
Install NVIDIA CUDA and cuDNN on Power systems
This tutorial explains how to verify whether the NVIDIA toolkit has been installed previously in an environment. It also provides instructions on how to install NVIDIA CUDA on a POWER architecture server.
Containerize PowerAI with nvidia-docker
Learn how to build and run Dockerized deep learning analytics using PowerAI libraries on an IBM Power System S822 for High Performance Computing ("Minsky") system with GPUs.
NVIDIA Blog: AI
See the real world application of AI throughout industry, academia, and other domains.
Bringing the Deep Learning Revolution into the Enterprise (IBM Edge)
Learn from IBM's Chief Engineer for Deep Learning about how deep learning can apply to your problems.
Weekly NVIDIA AI Podcast
Get your weekly dose of the latest in AI through this podcast that you can listen to on the go.
- And even more technical resources
PowerAI and deep learning blogs
Read what the experts are saying about deep learning with IBM PowerAI and IBM PowerAI vision.
There are a few steps that need to be performed to use Snap ML in a DSXL environment. These are outlined as follows: go to the IBM DSXL web console and log in with your username and password. On the IBM DSXL homepage dashboard, click Add project. Once you click Add project, a new window named Create Project appears...
What is Large Model Support? Deep learning is a rapidly evolving field under the umbrella of artificial intelligence. This segment of AI has already demonstrated the capability to solve a variety of problems in computer vision, natural language processing, and video and text processing. Deep learning neural networks consist of multiple hidden layers, and the number...
Introduction Advancements in the field of deep learning are creating use cases that require larger deep learning models and large datasets. One such use case is MRI image segmentation to identify brain tumors. Training such models increases the memory requirements on the GPU. However, GPUs are limited in their memory capacities. The latest...
Working with Snap ML in PowerAI Enterprise 1.1.2: Spectrum Conductor in PowerAI Enterprise 1.1.2 provides the capability to set up a Spark cluster automatically. To execute an application using snap-ml-spark APIs in a Spectrum Conductor environment in IBM PowerAI Enterprise, either run the snap-ml-spark application through spark-submit in PowerAI Enterprise or enable snap-ml-spark APIs inside Jupyter Notebooks in PowerAI Enterprise...
Challenges in Energies and Utilities: Transmission towers and substations form core infrastructure elements that ensure efficient supply of power across the country. Power lines span several thousands of miles, delivering energy to several substations before reaching their consumers. A typical transmission tower has a variety of components like conducting wires, insulators, bird guards, marker...
At IBM, we took a tangential approach to empowering subject matter experts with tools to train models for AI solutions. Imagine a radiologist who understands anomalies in MRIs and X-rays having the ability to train models to integrate AI into their practice. AI in radiology is transforming health care, with MRI machines that can study the...
Overview PowerAI 1.5.3 supports Caffe as one of its deep learning frameworks. Caffe is the system default version in PowerAI. It actually contains two variations: Caffe BVLC, which contains the upstream Caffe 1.0.0 version developed by the Berkeley Vision and Learning Center (BVLC) and other community contributors. The Berkeley Vision and Learning Center has been renamed BAIR (Berkeley Artificial Intelligence...
Introduction Large Model Support (LMS) is a feature provided in IBM Caffe that allows the successful training of deep learning models that would otherwise exhaust GPU memory and abort with “out of memory” errors. LMS manages this oversubscription of GPU memory by temporarily swapping tensors to host memory when they are not needed. IBM POWER...
IBM PowerAI Distributed Deep Learning (DDL) can be deployed directly into your enterprise private cloud with IBM Cloud Private (ICP). This blog post explains how to do that using TCP or InfiniBand communication between the worker nodes. We will use the command line interface; however, the web interface could also be used for most of...
The 1.5.3 release of PowerAI includes updates to IBM's Distributed Deep Learning (DDL) framework that facilitate the distribution of TensorFlow Keras training. In this article, we walk through the process of taking an existing TensorFlow Keras model, making the code changes necessary to distribute its training using DDL, and using ddlrun to execute the...
Connect and collaborate
Ask a question, contribute to the conversation, and meet the IBM PowerAI team: