IBM PowerAI developer portal

Learn about deep learning and PowerAI. Create something amazing.

Frequently asked questions

Find answers to some of the most frequently asked questions about deep learning and PowerAI.

What is PowerAI?

PowerAI is an enterprise software distribution of popular open-source deep learning frameworks, pre-packaged for easier use. It has been specially compiled and optimized for the IBM Power platform. PowerAI greatly reduces the time, effort, and difficulty of getting a deep learning environment operational and performing optimally.

PowerAI includes:

  • Enterprise-ready software distribution built with open-source packages and frameworks such as TensorFlow, Keras, and Caffe
  • Performance optimization for faster training times
  • Tools for ease of development

What is the current release and where can I get it?

PowerAI 1.5.4 became generally available on November 16, 2018. It is available as a no-charge orderable part number from IBM. To place an order, please contact your IBM representative or authorized Business Partner. See the PowerAI Releases page for more information about PowerAI 1.5.4 and where to get it.

PowerAI Enterprise 1.1.2 became generally available on November 16, 2018. There are several ways to get PowerAI Enterprise 1.1.2:

  • Install an evaluation version of PowerAI Enterprise to give it a try. If you don’t already have one, you’ll need to register for an IBMid to access the evaluation.
  • Order PowerAI Enterprise 1.1.2 from your IBM representative or authorized Business Partner.

See the PowerAI Enterprise Releases page for more information about PowerAI Enterprise 1.1.2 and where to get it.

I have access to a Power server but it’s not equipped with GPUs. Can I test drive PowerAI on it?

No. It is not possible to run PowerAI without access to GPUs and the associated NVIDIA libraries. PowerAI is optimized to leverage the unique capabilities of IBM Power Systems accelerated servers and is not available on any other platform. It is supported on:

  • IBM Power System AC922 with NVIDIA Tesla V100 GPUs
  • IBM Power System S822LC with NVIDIA Tesla P100 GPUs

Are there any other major frameworks in plan?

The PowerAI team evaluates additional frameworks on a case-by-case basis as part of our participation in the rapidly evolving deep learning ecosystem. As part of this evaluation, it is immensely helpful to understand specific client requirements and the relevant opportunity details. Please share details of these requirements directly with the offering team.

What is the support scenario for PowerAI?

IBM offers formal support for PowerAI components as long as their versions are consistent with the release configuration. If you choose to use a different version of any of the components, no formal support is available. However, in keeping with industry norms, specific questions can be posted on the PowerAI space in dW Answers. This forum is monitored by the IBM technical team, and technical support is provided on a best-effort basis.

Can PowerAI run on x86 platforms?

PowerAI is optimized to leverage the unique capabilities of IBM Power Systems accelerated servers, and is not available on any other platforms. It is supported on:

  • IBM Power System AC922 with NVIDIA Tesla V100 GPUs
  • IBM Power System S822LC with NVIDIA Tesla P100 GPUs

What POWER9 firmware level is required for PowerAI?

Get the latest version of POWER9 firmware from IBM Fix Central.

Is PowerAI available on a public cloud?

In partnership with Nimbix, the PowerAI on IBM Cloud service provides users with access to IBM® Power Systems™ with NVIDIA® GPUs running the PowerAI software. There are three different plans to choose from:

  • Small: Provides one PowerAI cloud instance with 1 GPU
  • Medium: Provides one PowerAI cloud instance with 2 GPUs
  • Large: Provides one or more PowerAI cloud instances with 4 GPUs each

You can also get 24 hours of free processing time on the PowerAI platform. Register now.

What is Large Model Support?

IBM Caffe with Large Model Support (LMS) loads the neural model and data set in system memory and caches activations to GPU memory, allowing models and training batch sizes to scale significantly beyond what was previously possible.

You can enable LMS by adding -lms <size in KB> to the command line; for example, -lms 1000. Any memory chunk larger than 1000 KB is then kept in CPU memory and fetched to GPU memory only when needed for computation. A very large value such as -lms 10000000000 effectively disables the feature, while a small value makes LMS more aggressive. The value controls this performance trade-off.
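The placement rule described above can be modeled with a short Python sketch (illustrative only; the function name and chunk sizes are hypothetical and not part of IBM Caffe):

```python
# Toy model (not IBM Caffe code) of the -lms placement rule described above:
# any chunk strictly larger than the threshold stays in CPU memory and is
# fetched to GPU memory only when needed; everything else lives on the GPU.

def place_chunks(chunk_sizes_kb, lms_threshold_kb):
    """Map each chunk size (KB) to its placement under the -lms rule."""
    return {
        size: "cpu" if size > lms_threshold_kb else "gpu"
        for size in chunk_sizes_kb
    }

chunks = [512, 1000, 4096, 250000]

# -lms 1000: the 4096 KB and 250000 KB chunks are kept in CPU memory.
print(place_chunks(chunks, 1000))

# A huge threshold like -lms 10000000000 effectively disables the feature.
print(place_chunks(chunks, 10_000_000_000))
```

A smaller threshold swaps more chunks out (more aggressive LMS, larger models, more transfer overhead); a larger threshold keeps more on the GPU.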

LMS uses system memory and GPU memory to support more complex and higher resolution data.

TensorFlow Large Model Support (TLMS) provides an approach to training large models, batch sizes, and data sizes that cannot fit in GPU memory. It achieves this by automatically moving tensor data between GPU and system memory. TensorFlow Large Model Support is currently available as a technology preview. For more information on how to enable TensorFlow Large Model Support, start with the README. If you’re using TLMS with PowerAI and need additional information, check the PowerAI README.

PyTorch Large Model Support (LMS) is a feature provided in PowerAI PyTorch that allows the successful training of deep learning models that would otherwise exhaust GPU memory and abort with “out of memory” errors. LMS manages this oversubscription of GPU memory by temporarily swapping tensors to host memory when they are not needed.

See the “Getting started with PyTorch” topic in the IBM Knowledge Center for more information.
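The swap-in/swap-out idea can be illustrated with a toy Python cache (a conceptual sketch only, assuming a least-recently-used eviction policy; this is not the PowerAI PyTorch implementation, which is documented in the Knowledge Center):

```python
from collections import OrderedDict

class ToyLMS:
    """Toy illustration (not PowerAI code) of swapping tensors between a
    fixed-capacity GPU and host memory when GPU memory is oversubscribed."""

    def __init__(self, gpu_capacity):
        self.gpu_capacity = gpu_capacity
        self.gpu = OrderedDict()  # name -> size, least to most recently used
        self.host = {}            # tensors currently swapped out to host memory

    def touch(self, name, size=None):
        """Use a tensor on the GPU, swapping others out to host if needed."""
        if size is None:  # an existing tensor is needed again
            size = self.host.pop(name) if name in self.host else self.gpu.pop(name)
        elif name in self.gpu:
            self.gpu.pop(name)
        # Evict least-recently-used tensors until this one fits on the GPU.
        while sum(self.gpu.values()) + size > self.gpu_capacity:
            lru_name, lru_size = self.gpu.popitem(last=False)
            self.host[lru_name] = lru_size
        self.gpu[name] = size

lms = ToyLMS(gpu_capacity=10)
lms.touch("a", 6)
lms.touch("b", 6)  # "a" is swapped out to host to make room for "b"
print(sorted(lms.gpu), sorted(lms.host))  # ['b'] ['a']
lms.touch("a")     # "a" is needed again: it swaps back in, "b" swaps out
print(sorted(lms.gpu), sorted(lms.host))  # ['a'] ['b']
```

The real feature makes these swap decisions automatically during training, so models larger than GPU memory can still complete their forward and backward passes.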

What is Distributed Deep Learning?

IBM PowerAI Distributed Deep Learning (DDL) is an MPI-based communication library specifically optimized for deep learning training. An application integrated with DDL becomes an MPI application, which allows the ddlrun command to launch the job in parallel across a cluster of systems. DDL understands multi-tier network environments and uses different libraries (for example, NCCL) and algorithms to get the best performance in multi-node, multi-GPU environments. DDL is currently available as a PowerAI technology preview.
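As a sketch, a DDL-integrated training script might be launched across a cluster like this (the host names and script name are placeholders; see the PowerAI documentation for the full set of ddlrun options):

```shell
# Hypothetical example: launch a DDL-integrated script on two hosts.
# host1, host2, and train.py are placeholders, not real resources.
ddlrun -H host1,host2 python train.py
```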

Check out this performance proof-point that shows how DDL maximized research productivity by training on more images at the same time with TensorFlow 1.4.0 running on a cluster of IBM Power System AC922 servers with Nvidia Tesla V100 GPUs connected via NVLink 2.0: Distributed Deep Learning: IBM POWER9™ with Nvidia Tesla V100 results in 2.3X more data processed on TensorFlow versus tested x86 systems.

How does PowerAI relate to Watson or other public cloud AI offerings?

PowerAI and Watson are designed to ease the path for enterprises that want to start using advanced AI technologies. AI services and solutions can currently be separated into two broad categories: API-driven technologies in the cloud, and PowerAI, which is a deeper layer delivered as a combination of hardware and the most popular deep learning frameworks.

API-driven technologies in the cloud are perfect for solving specific business problems, such as translating between languages, converting speech to text, or performing sentiment analysis. These requirements can be addressed by API-driven AI solutions such as IBM Watson.

Then there is a deeper level of technology that uses proprietary data and information to produce insights, which can operate alongside API-based solutions to maximize the AI results. This deeper layer, delivered as a combination of hardware and the IBM PowerAI software solution, provides developers and data scientists with the most popular machine learning frameworks to enable the rapid deployment of high-performance AI.

The two are not mutually exclusive. A PowerAI-based workflow can call a cloud-based API to execute a function as part of that larger workflow. We have examples of this in our PowerAI developer portal.

What is DSX and is it supported on the Power servers?

Data Science Experience (DSX) is a complete ecosystem of open source-based frameworks, libraries, and tools that lets data scientists develop, validate, and deploy algorithms and collaborate with communities of scientists and developers. The offering allows data scientists to develop algorithms in their preferred language, IDE, and libraries. DSX is offered in the cloud and on premises. PowerAI provides a deep learning ecosystem for data scientists and developers in which frameworks like TensorFlow and Caffe are pre-installed. Efforts are under way to deliver DSX on Power Systems so that the deep learning framework adds value to DSX users.

What is PowerAI Vision?

PowerAI Vision can help provide robust end-to-end workflow support for deep learning models related to computer vision. This enterprise-grade software provides a complete ecosystem to label raw data sets for training, creating, and deploying deep learning-based models. PowerAI Vision is designed to empower subject matter experts with no skills in deep learning technologies to train models for AI applications. It can help train highly accurate models to classify images and detect objects in images and videos.

PowerAI Vision is built on open source frameworks for modeling and managing containers to deliver a highly available framework, providing application lifecycle support, centralized management and monitoring, and support from IBM.

PowerAI Vision 1.1.2 is available now. See the PowerAI Vision page for more information.

How can I access PowerAI Vision?

IBM PowerAI Vision is licensed per Virtual Server. When you install it, a software license metric (SLM) tag file is created to track usage with the IBM License Metric Tool. See the “License Management in IBM License Metric Tool” topic in the IBM Knowledge Center for more information.


How does IBM PowerAI Vision provide value?

IBM PowerAI Vision is designed to provide an end-to-end deep learning platform for subject matter experts (non-data scientists), application developers, and data scientists. It offers several features and optimizations that can help accelerate tasks related to data labeling, training, and deployment, such as:

  • User interface-driven interaction to configure and manage lifecycles of data sets and models
  • A differentiated capability in which trained deep learning models automatically detect objects in videos
  • Preconfigured deep learning models specialized to classify and detect objects
  • Preconfigured hyper-parameters optimized to classify and detect objects
  • Training visualization and runtime monitoring of accuracy
  • Integrated inference service to deploy models in production
  • Scalable architecture designed to run deep learning, high-performance analytics, and other long-running services and frameworks on shared resources

Can IBM PowerAI Vision be used solely as a data labeling tool?

Yes. The labeled data can be exported and used as a training set in your own ecosystem.