Watson Machine Learning Community Edition 1.6.2 has been released! The conda packages in the main channel have been updated and the container images on Docker Hub are new. If you happen by our main channel with a browser, you will also notice that WML CE now has a brand new dashboard web front end. The dashboard will help you navigate the various releases, search for available packages, and find information on each release, including the latest change log and links to documentation.

Let’s take a quick look into 1.6.2.

Platform Support

GPU support and CPU support

The GPU-enabled packages in WML CE 1.6.2 have all been built against CUDA 10.1 Update 2, the latest version of NVIDIA’s CUDA Toolkit. The cudatoolkit and cudatoolkit-dev packages have both been updated. This release is compatible with previous CUDA 10.1 releases, so packages built against our previous 10.1 toolkit conda package do not need to be recompiled, which is nice. The entire set of GPU-related packages in WML CE has been updated as well, including NCCL (2.4.8) and cuDNN (7.6.3), which bring performance enhancements and fixes. NVIDIA’s TensorRT 6 is also included in this release, and the corresponding TensorRT features in TensorFlow and PyTorch have been enabled.

A pytorch-cpu package has been added for installations that are known to be CPU-only and would rather not pull in the space-consuming cudatoolkit package when it is not needed. A compatible torchvision-cpu package is also included. These have been added to the powerai-cpu meta-package as dependencies, so the entire set of CPU-targeted packages can be installed all at once.


All of the new packages in WML CE 1.6.2 now have versions for both Python 3.6 and Python 3.7. Upstream support for Python 2.7 ends at the end of 2019, and since WML CE 1.6.2 will be supported past that date, we’ve opted not to include Python 2.7 packages. So it’s all Python 3.x from here on out! Going forward, expect each release to include packages for two versions of Python concurrently. Both the Python 3.6 and 3.7 package sets are provided from the same main channel. This also means, of course, that there is no upgrade path from WML CE 1.6.1 to 1.6.2 for Python 2.7 environments. It’s sad to see Python 2 go, but it’s OK – we’ll get through this together!

Speaking of Upgrades!

WML CE 1.6.2 has updated versions of each of the included machine learning and deep learning frameworks. Upgrades to existing 1.6.1 environments work, but note that the latest versions of the conda tool (version 4.7.x) have different behavior in the package solver, so be sure to consult our Knowledge Center for upgrade suggestions in certain scenarios.

TensorFlow ecosystem updates

The TensorFlow package included in WML CE 1.6.2 has been updated to version 1.15. Please review the release notes for this TensorFlow version. One very important note on this release is that 1.15 will be the final 1.x version. In future releases, WML CE will transition to TensorFlow 2.x, matching the TensorFlow community and ecosystem, which are in transition as well. The multi-backend version of Keras created by the Keras team is included at version 2.2.5. We do continue to recommend TensorFlow’s built-in tf.keras, as it is better supported and performs better. TensorBoard, TensorFlow Estimator, TensorFlow Probability, and TensorFlow Serving have all been updated to versions compatible with TensorFlow 1.15.
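To illustrate the tf.keras recommendation, here’s a minimal sketch of a small model built with the in-graph tf.keras API rather than multi-backend Keras. The layer sizes and loss here are arbitrary, chosen only for the example:

```python
import tensorflow as tf

# A tiny model using tf.keras, which ships inside TensorFlow itself.
# Layer widths (8 inputs, 16 hidden units, 1 output) are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```

Because tf.keras lives inside TensorFlow, it picks up features like XLA and Automatic Mixed Precision without any extra glue code.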

TensorFlow’s XLA (Accelerated Linear Algebra) feature has been enabled in WML CE for the first time. XLA can speed up your training runs by compiling and fusing subgraphs of your model. The combination of TensorFlow 1.15 with XLA, Automatic Mixed Precision, tf.data for proper I/O pipelines, and an NVLink-enabled server has us excited to hear what you’ll be able to accomplish.
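As a hedged sketch of how to opt in to XLA auto-clustering (these are stock TensorFlow 1.15 mechanisms, not anything WML CE-specific):

```python
import os

# Option 1: request XLA auto-clustering for all eligible ops via an
# environment variable, read when TensorFlow starts up.
os.environ["TF_XLA_FLAGS"] = "--tf_xla_auto_jit=2"

import tensorflow as tf

# Option 2: enable JIT compilation in a session's graph options
# (the TF 1.x session-based API).
config = tf.compat.v1.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.compat.v1.OptimizerOptions.ON_1)
# Pass `config` to tf.compat.v1.Session(config=config) when creating a session.
```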

Note that TensorFlow 2.0 has officially been released upstream! If you have been tracking this release as we have, you are aware of the large usability changes that come along with 2.0. It’s an important release, and we plan to make 2.0 available via the new Early Access Channel in the coming weeks. More details on the Early Access Channel below! Until then, TensorFlow 1.15 does include the 2.0 APIs, which should aid in migration efforts. To use 1.15 in 2.0 mode, call the enable_v2_behavior() function and set the environment variable TF2_BEHAVIOR=1.
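Putting the two steps from the paragraph above together, a minimal sketch looks like this (the guard makes it a no-op on TF 2.x, where 2.0 behavior is already the default):

```python
import os
os.environ["TF2_BEHAVIOR"] = "1"  # must be set before TensorFlow is imported

import tensorflow as tf

if hasattr(tf, "enable_v2_behavior"):
    tf.enable_v2_behavior()  # switches TF 1.15 to 2.0 semantics

# With 2.0 behavior on, eager execution runs ops immediately:
x = tf.add(tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0]))
```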

PyTorch ecosystem updates

PyTorch has been updated to version 1.2.0. The release notes for this release can be found on GitHub. One of the big additions in version 1.2 is the new TorchScript API. TorchScript is a Just-In-Time (JIT) compiler for converting nn.Modules into ScriptModules. PyTorch 1.2 also features expanded ONNX export support and improved support for visualizing PyTorch models using TensorBoard. TorchText and APEX have been promoted from technical previews to fully supported add-ons. Also, TorchVision has been split out into a separate package that mirrors the upstream community packages and should help with dependency resolution.
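As a small illustration of the TorchScript workflow (the module here is a made-up toy, not something from the release notes):

```python
import io
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A minimal module to demonstrate TorchScript compilation."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

# torch.jit.script compiles the nn.Module into a ScriptModule that can be
# serialized and later executed without a Python dependency (e.g. from C++).
scripted = torch.jit.script(TinyNet())

# Round-trip through an in-memory buffer to show save/load.
buffer = io.BytesIO()
torch.jit.save(scripted, buffer)
buffer.seek(0)
restored = torch.jit.load(buffer)

out = restored(torch.ones(1, 4))
```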

Be sure to check the release notes for API changes, deprecations, and updates as PyTorch is evolving quickly.

IBM Distributed Deep Learning (DDL) and Large Model Support (LMS) updates

The DDL library has been updated to version 1.5. The main improvements in this release are efficiencies at scale: the DDL code was exercised extensively on the fastest supercomputer in the world and tuned for maximum performance on large clusters. DDL 1.5 also features improved LSF integration, including leveraging LSF’s built-in query support to a configured IBM Cluster System Manager (CSM) for accurate topology information. XLA support and usability improvements round out the DDL update.

The TensorFlow Large Model Support (TFLMS) package has been updated to version 2.0.2. Check out our GitHub page for a recent set of TFLMS examples that will help you understand and get the most out of TFLMS for your project. The “ManyModel” example in particular is a great all-in-one example set.

Machine Learning FTW

As exciting as Deep Learning (and its lightning fast evolution of the last few years) is, it’s hard to deny the production value of traditional machine learning techniques. WML CE 1.6.2 includes the best set of tools to accelerate your machine learning models. There are so many updates here from our distributed machine learning team, it’s easy to get lost in them! Stick with me, I’ll guide you through:

SparkML Estimators and Transformers have been added for Linear and Logistic Regression and SVC algorithms in SnapML. These adhere to the Spark Estimator Specification and can be used with SparkML pipelines.

Now, if you’ve been to any of our meetups, or talked with the WML CE team about SnapML, you may have heard us mention that a Snap-native gradient boosting API was on its way. I’m excited to announce that WML CE 1.6.2 includes a technical preview of SnapBoost, which builds upon the previously added Decision Tree Classifiers and provides an efficient and easy to use gradient boosting solution. SnapBoost constructs an ensemble of decision trees and can be used for both classification use cases as well as standard regression tasks.

This boosting procedure enables Gradient Boosted Decision Tree frameworks (“GBDTs” if you’re savvy) like SnapBoost to generalize to unseen data exceptionally well. SnapBoost models are constructed using the new pai4sk.BoostingMachine() API. Dive into the API docs for more information!

WML CE 1.6.2 has updated the included RAPIDS packages as well. The cuML package has been updated to version 0.9.1 and now includes built-in Dask support for incredibly easy workload distribution. Dask support has also been added to cuDF (0.9.0) via the dask-cudf package. For the first time, CuPy is included as well, at version 6.2.0. All of the RAPIDS packages now have their own meta-package, powerai-rapids, which can be used to install the complete RAPIDS package set all at once.

The team has also updated XGBoost to version 0.90 for both ppc64le and x86-64, and Dask support has been added with the dask-xgboost package.

Additional Channels and packages

As the WML CE distribution of packages evolves, it’s clear that as fast as we can go, some machine learning and AI communities move even faster. One of WML CE’s main goals is to provide a consumable distribution for enterprise-level machine learning tasks. To provide packages ahead of an official WML CE release, we’re introducing the WML CE Early Access Channel. Check here early and often for the latest framework versions to hit this channel!

We’re also introducing the WML CE Supplementary Channel. I like to say that this one “puts the C in WML CE”. (Hint: the C stands for “Community”). Packages in the Supplementary Channel are curated and tested, but they are not supported with fixes and security updates like the packages in the main channel. The channel is hosted on Anaconda.org, which is a great resource for community conda packages. It provides a great usability benefit by allowing the use of the -c option to the conda command to enable a channel.

Community contributions are welcome in the Supplementary Channel! The conda recipes for the packages in this channel are hosted in our GitHub.com repository and built on Travis-ci.org. If you’d like to make a contribution, be sure to read the README for instructions, then post a PR for us to review. Not feeling up for creating your own recipe? Open an issue with a package request and we’ll investigate adding it!


The content of WML CE 1.6.2 is covered above, but we’ve also expanded our delivery methods. With WML CE 1.6.2, we’ve created images based on Red Hat’s Universal Base Image (UBI), and they are available for the first time in the Red Hat image registry. In the Red Hat registry, images are tagged in two ways: by framework version and by WML CE version. The framework tag references the specific version of the framework included in the image, while the WML CE tag identifies the WML CE release that the framework version shipped with.

The Ubuntu-based images on Docker Hub have been refreshed as well. The tag format for these images continues to follow the powerai-version-framework-cpu-OS-python-architecture scheme. There is a new rapids image that includes the package set installed with the powerai-rapids meta-package.

There are a few small differences in the new image set to be aware of. The first is a non-root default user, wmlce – a basic security-driven change. The second is that the WML CE packages are installed in a non-base conda environment, also named wmlce, which is the current recommended usage model for conda. Keeping a clean base environment is also helpful for those who wish to customize their containers by building upon our image.


And we made it through! Thanks for staying close and reading about our new release. Let us know what you think! Happy (deep) learning!
