IBM PowerAI Vision

Enterprise-grade suite of tools for labeling raw datasets for training, creating, and deploying deep learning-based vision models

IBM PowerAI Vision 1.1.5

Released: 11/15/2019

Overview

IBM PowerAI Vision is a new-generation video and image analysis platform that offers built-in deep learning models which learn to analyze images and video streams for classification and object detection.

PowerAI Vision includes tools and interfaces that allow users with limited deep learning expertise to get up and running quickly and easily. And because PowerAI Vision is built on open source frameworks for modeling and for managing containers, it delivers a highly available platform that includes application life-cycle support, centralized management and monitoring, and support from IBM.

What’s new?

IBM Developer learning path

Getting started with PowerAI Vision
This learning path is designed for developers interested in quickly getting up to speed on what PowerAI Vision offers and how to use it. The learning path consists of step-by-step tutorials, deep-dive videos, and complete examples of working code.

Technology updates and features

PowerAI Vision 1.1.5 builds upon previous releases and includes the following updates and features:

  • Support for DICOM images
    The DICOM format is a widely used standard for processing medical images.
  • Red Hat OpenShift support
    You can now install PowerAI Vision on an OpenShift cluster.
  • Integration with Maximo® Asset Monitor
    Maximo Asset Monitor is a cloud service that enables users to remotely monitor devices at the edge. This integration allows PowerAI Vision to send inference results to the Maximo Asset Monitor cloud platform for further analysis. See Integrating PowerAI Vision with Maximo Asset Monitor for more information.
  • SSD model support
    SSD models are suitable for real-time inference but are less accurate than Faster R-CNN. See Training a model for more information.
  • GoogLeNet and tiny YOLO V2 model Core ML support
    See Training a model for more information.
  • TensorRT support
    Single Shot Detector and Faster R-CNN models are now enabled for TensorRT. See Training a model for more information.
  • Python 3 support for custom models
    Custom models must be written for Python 3. Trained custom models from releases prior to Version 1.1.5 will not work if they support only Python 2. For more information about custom models, see Preparing a model that will be used to train data sets in PowerAI Vision.
  • PyTorch custom model support
    Imported custom models can now be PyTorch or TensorFlow based.
  • Multiple improvements to the user interface
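As a quick illustration of the DICOM support mentioned above, the sketch below checks whether a file follows the DICOM Part 10 container format before it is added to a data set. A DICOM file begins with a 128-byte preamble followed by the magic bytes "DICM". This helper is illustrative only and is not part of the PowerAI Vision API.

```python
def is_dicom(data: bytes) -> bool:
    """Return True if the byte stream looks like a DICOM Part 10 file.

    DICOM Part 10 files start with a 128-byte preamble followed by the
    four magic bytes b"DICM".
    """
    return len(data) >= 132 and data[128:132] == b"DICM"


if __name__ == "__main__":
    # A minimal fake DICOM header: 128 zero bytes, then the "DICM" magic.
    fake_dicom = b"\x00" * 128 + b"DICM" + b"\x00" * 16
    print(is_dicom(fake_dicom))      # True
    print(is_dicom(b"not a dicom"))  # False
```

A check like this can be used to pre-filter uploads client-side; the actual parsing and rendering of DICOM pixel data is handled by PowerAI Vision itself.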

For all the details, see the What’s New topic in the IBM Knowledge Center.

Give it a try

PowerAI Vision code patterns, tutorials, and learning paths

Check out these real-world examples and tutorials that highlight PowerAI Vision in action.

Additional resources to help you get started

Requesting enhancements for IBM PowerAI Vision

The IBM Request for Enhancement (RFE) tool is now available for you to submit formal enhancement requests to the PowerAI Vision development team. One of the benefits of using the RFE tool is that other clients can vote on submitted requirements, which helps IBM to prioritize requests.

Get started

Go here to get started: ibm.biz/vision-rfe

The RFE for PowerAI Vision pages are part of IBM Developer and require that you sign in with an IBM ID to submit or vote on a request. Make sure that your IBM ID profile includes your current company and email address so that we can contact you if we have questions.

Search first

Once on the RFE page, click on the “Search” tab to view existing requests before you submit a new request. It is much more useful to vote for a previously submitted request than to submit a duplicate request.
