IBM PowerAI developer portal

Learn about deep learning and PowerAI. Create something amazing.

Frequently asked questions

Find answers to some of the most frequently asked questions about deep learning and PowerAI.

What is PowerAI?

PowerAI is an enterprise software distribution of popular open-source deep learning frameworks, pre-packaged for easier use and specially compiled and optimized for the IBM Power platform. PowerAI greatly reduces the time, effort, and difficulty of getting a deep learning environment operational and performing optimally.

PowerAI includes:

  • Enterprise-ready software distribution built with open-source packages and frameworks such as TensorFlow and Caffe
  • Performance optimization for faster training times
  • Tools for ease of development

What is the current release and where can I get it?

Note: PowerAI Enterprise has been rebranded as IBM Watson Machine Learning Accelerator.

IBM Watson ML Accelerator V1.2, which became generally available on March 22, 2019, delivers improved and updated integrations that expand its machine learning and deep learning workloads.

PowerAI 1.6.0 became generally available on March 15, 2019. See the PowerAI Releases page for more information about PowerAI 1.6.0 and where to get it.

Watson Machine Learning Accelerator 1.2 became generally available on March 22, 2019. There are several ways to get WML Accelerator 1.2:

  • Install an evaluation version of WML Accelerator to give it a try. If you don’t already have one, you’ll need to register for an IBMid to access the evaluation.
  • Order it from your IBM representative or authorized Business Partner.

See the WML Accelerator releases page for more information.

I have access to a Power server but it’s not equipped with GPUs. Can I test drive PowerAI on it?

No, it is not possible to run PowerAI without access to GPUs and the associated NVIDIA libraries. PowerAI is optimized to leverage the unique capabilities of IBM Power Systems accelerated servers and is not currently available on any other platforms. It is supported on:

  • IBM Power System AC922 with NVIDIA Tesla V100 GPUs
  • IBM Power System S822LC with NVIDIA Tesla P100 GPUs

Are there any other major frameworks in plan?

The PowerAI team is continuously evaluating additional frameworks as part of our participation in the rapidly evolving deep learning ecosystem. As part of this evaluation, it is immensely helpful to understand specific client requirements and the relevant opportunity details. Please share details of these requirements directly with the offering team.

What is the support scenario for PowerAI?

IBM offers formal support for PowerAI components as long as their versions are consistent with the release configuration. If you choose to use a different version of any of the components, no formal support will be available. However, in keeping with industry norms, specific questions can be posted on the PowerAI space in IBM Developer Answers. This forum is monitored by the IBM technical team, and technical support is provided on a best-effort basis.

Can PowerAI run on x86 platforms?

PowerAI is optimized to leverage the unique capabilities of IBM Power Systems accelerated servers, and is not currently available on any other platforms. It is supported on:

  • IBM Power System AC922 with NVIDIA Tesla V100 GPUs
  • IBM Power System S822LC with NVIDIA Tesla P100 GPUs

What POWER9 firmware level is required for PowerAI?

Get the latest version of firmware for POWER9 from Fix Central.

Is PowerAI available on a public cloud?

In partnership with Nimbix, the PowerAI on IBM Cloud service provides users with access to IBM® Power Systems™ with NVIDIA® GPUs running the PowerAI software. There are three plans to choose from:

  • Small: Provides one PowerAI cloud instance with 1 GPU
  • Medium: Provides one PowerAI cloud instance with 2 GPUs
  • Large: Provides one or more PowerAI cloud instances with 4 GPUs each

What is Large Model Support?

IBM Caffe with Large Model Support (LMS) loads the neural model and data set in system memory and caches activity to GPU memory, allowing models and training batch size to scale significantly beyond what was previously possible.

You can enable LMS by adding the -lms <size in KB> option, for example -lms 1000. Any memory chunk larger than 1000 KB is then kept in CPU memory and fetched to GPU memory only when needed for computation. A very large value such as -lms 10000000000 effectively disables the feature, while a small value makes LMS more aggressive. The value therefore controls the performance trade-off between GPU memory savings and data-transfer overhead.
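The threshold rule described above can be sketched in a few lines of pure Python. This is illustrative only: the function and constant names are hypothetical, and the real decision is made inside IBM Caffe's memory allocator.

```python
# Sketch of the -lms threshold rule (illustrative; the real logic lives
# inside IBM Caffe's memory allocator, not in user code).

LMS_THRESHOLD_KB = 1000  # value passed on the command line as: -lms 1000

def placement(chunk_size_kb, lms_threshold_kb=LMS_THRESHOLD_KB):
    """Return where a memory chunk of the given size would be kept.

    Chunks strictly larger than the threshold stay in CPU (system) memory
    and are fetched to GPU memory only when needed for computation;
    smaller chunks stay resident in GPU memory.
    """
    return "cpu" if chunk_size_kb > lms_threshold_kb else "gpu"

# A huge threshold effectively disables LMS: everything stays on the GPU.
assert placement(500_000, lms_threshold_kb=10_000_000_000) == "gpu"
# A small threshold is more aggressive: most chunks live in CPU memory.
assert placement(500_000, lms_threshold_kb=1) == "cpu"
```

Lowering the threshold trades GPU memory pressure for more host-to-device transfer traffic, which is the performance trade-off the -lms value controls.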

LMS uses system memory and GPU memory to support more complex and higher resolution data.

TensorFlow Large Model Support (TLMS) provides an approach to training large models, batch sizes, and data sizes that cannot fit into GPU memory. It achieves this by automatically moving tensor data between GPU and system memory. For more information on how to enable TensorFlow Large Model Support, see the README. If you’re using TLMS with PowerAI and need additional information, check the PowerAI README.

PyTorch Large Model Support (LMS) is a feature of PowerAI PyTorch that allows the successful training of deep learning models that would otherwise exhaust GPU memory and abort with “out of memory” errors. LMS manages this oversubscription of GPU memory by temporarily swapping tensors to host memory when they are not needed.

See the “Getting started with PyTorch” topic in the IBM Knowledge Center for more information.

What is Distributed Deep Learning?

IBM PowerAI Distributed Deep Learning (DDL) is an MPI-based communication library specifically optimized for deep learning training. An application integrated with DDL becomes an MPI application, which allows the ddlrun command to launch the job in parallel across a cluster of systems. DDL understands multi-tier network environments and uses different libraries (e.g., NCCL) and algorithms to get the best performance in multi-node, multi-GPU environments. DDL is currently available as a PowerAI technology preview.
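As a rough sketch of what launching a DDL-integrated script might look like, the following shows a ddlrun invocation. The hostnames and script name are placeholders, and the exact options vary by release; check the PowerAI documentation for your version.

```shell
# Hypothetical example: launch a DDL-enabled training script across two
# hosts (host1, host2 and train.py are placeholders for your cluster).
ddlrun -H host1,host2 python train.py
```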

Check out this performance proof-point that shows how DDL maximized research productivity by training on more images at the same time with TensorFlow 1.4.0 running on a cluster of IBM Power System AC922 servers with Nvidia Tesla V100 GPUs connected via NVLink 2.0: Distributed Deep Learning: IBM POWER9™ with Nvidia Tesla V100 results in 2.3X more data processed on TensorFlow versus tested x86 systems.

What is DSX and is it supported on the Power servers?

Data Science Experience (DSX) is a complete ecosystem of open source-based frameworks, libraries, and tools that lets data scientists develop, validate, and deploy algorithms and collaborate with communities of scientists and developers. The offering allows data scientists to work in their preferred language, IDE, and libraries. DSX is offered both on the cloud and on premises. PowerAI provides a deep learning ecosystem for data scientists and developers in which frameworks such as TensorFlow and Caffe are pre-installed. Efforts are under way to deliver DSX on Power systems so that this deep learning ecosystem adds value to DSX users.

What is PowerAI Vision?

PowerAI Vision can help provide robust end-to-end workflow support for deep learning models related to computer vision. This enterprise-grade software provides a complete ecosystem for labeling raw data sets and for training, creating, and deploying deep learning-based models. PowerAI Vision is designed to empower subject matter experts with no skills in deep learning technologies to train models for AI applications. It can help train highly accurate models to classify images and detect objects in images and videos.

PowerAI Vision is built on open source frameworks for modeling and managing containers to deliver a highly available framework, providing application lifecycle support, centralized management and monitoring, and support from IBM.

PowerAI Vision 1.1.3 is available now. See the PowerAI Vision page for more information.

How can I access PowerAI Vision?

IBM PowerAI Vision is licensed per Virtual Server. When you install it, a software license metric (SLM) tag file is created to track usage with the IBM License Metric Tool. See the “License Management in IBM License Metric Tool” topic in the IBM Knowledge Center for more information.

How does IBM PowerAI Vision provide value?

IBM PowerAI Vision is designed to provide an end-to-end deep learning platform for subject matter experts (non-data scientists), application developers, and data scientists. It offers several features and optimizations that can help accelerate tasks related to data labeling, training, and deployment, such as:

  • User interface-driven interaction to configure and manage lifecycles of data sets and models
  • A differentiated capability where trained deep learning models automatically detect objects from videos
  • Preconfigured deep learning models specialized to classify and detect objects
  • Preconfigured hyper-parameters optimized to classify and detect objects
  • Training visualization and runtime monitoring of accuracy
  • Integrated inference service to deploy models in production
  • Scalable architecture designed to run deep learning, high-performance analytics, and other long-running services and frameworks on shared resources

Can IBM PowerAI Vision be used solely as a data labeling tool?

Yes. The labeled data can be exported and used as a training set in your own ecosystem.