IBM PowerAI is a powerful deep learning (DL), machine learning (ML) and AI platform that can be run in a distributed environment using IBM Spectrum Conductor with Spark.
Using PowerAI with IBM Spectrum Conductor with Spark on your IBM Power Systems machines helps you better manage your resources, simplifies driver dependencies through the Docker engine, and saves you time when running or rerunning containerized PowerAI workloads.
Using IBM Spectrum Conductor with Spark makes it easy to:
- Run GPU-based workloads that span several frameworks (TensorFlow, Caffe, Torch), each with different resource requirements
- Have a multitenant environment with multiple levels of code (dev, test, prod)
- Run across heterogeneous driver / toolkit environments – workloads stay agnostic to the driver
By utilizing PowerAI with IBM Spectrum Conductor with Spark you can:
- Run ML/DL workloads in a Dockerized context, decoupling the driver from the toolkit environment. This facilitates host-level updates, such as driver updates, without impacting the tenant images.
- Mount the appropriate driver from the host into the image at runtime, which allows for a seamless user experience.
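As a rough illustration of the runtime driver-mount pattern (this is a hand-run sketch, not the exact mechanism IBM Spectrum Conductor uses; the image name, driver paths, and device files here are hypothetical and vary by host):

```shell
# Illustrative sketch only: Conductor performs the equivalent of this mount
# automatically when it launches a containerized PowerAI workload.

# Expose the GPU device files to the container.
# Bind-mount the host's driver libraries read-only, so the image itself
# carries no driver and survives host-level driver updates unchanged.
docker run --rm \
  --device=/dev/nvidia0 \
  --device=/dev/nvidiactl \
  --device=/dev/nvidia-uvm \
  -v /usr/lib/nvidia:/usr/lib/nvidia:ro \
  my-powerai-image:latest \
  python train.py
```

Because the driver enters the container only at runtime, the same tenant image runs unmodified across hosts with different driver levels.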
Most importantly, the advantage over other solutions is that you have a single pane of glass to manage all types of workloads, not just GPU workloads. This helps eliminate silos and maximize your compute assets. Workload scheduling capabilities accommodate heterogeneous workloads (for example, training versus inference), and the platform supports multiple Spark versions, DL frameworks, and notebook versions.