PyTorch is a relatively new deep learning framework that has nonetheless gained adoption, especially among researchers and data scientists. The strength of PyTorch is its support for dynamic computational graphs, while most deep learning frameworks are based on static computational graphs. In addition, its strong NumPy-like, GPU-accelerated tensor computation has allowed Python developers to easily learn and build deep learning networks for GPUs and CPUs alike.
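
As a brief illustration of this dynamic, NumPy-like style (a minimal sketch; the tensor names and shapes here are arbitrary), the graph is built as ordinary Python executes, so control flow can depend on runtime values:

import torch

# tensors behave much like NumPy arrays, with optional gradient tracking
x = torch.randn(3, 4, requires_grad=True)
w = torch.randn(4, 2, requires_grad=True)

# the graph is recorded as this code runs, so ordinary Python
# control flow (here, an if-statement) can depend on runtime values
y = x.matmul(w)
if y.sum() > 0:
    y = y * 2

# backpropagate through whatever graph was actually built on this run
y.sum().backward()
print(x.grad.shape)   # torch.Size([3, 4])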

Learning objectives

This tutorial will demonstrate how to compile and install PyTorch from source code on a PowerPC 64 architecture server.

Note: IBM provides official Docker images for IBM PowerAI, which come with many popular deep learning frameworks pre-installed. If the version of PyTorch provided by the IBM Docker images works for you, that may be the easiest way to bring up a PyTorch environment on a PowerPC 64 architecture server.

Prerequisites

This tutorial uses a POWER9 server with the following configuration (you can verify your own system with the commands shown after the list):

  • Operating System: Ubuntu 16.04.
  • 32-thread POWER9.
  • 128 GB RAM.
  • 1 V100 Tesla GPU with NVLink.
  • NVIDIA CUDA 9.2.148 and driver version 396.44.
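
You can check the corresponding details of your own server with standard tools (nvidia-smi assumes the NVIDIA driver is already installed):

# architecture and CPU thread count
lscpu | grep -E "^(Architecture|CPU\(s\))"

# available memory
free -h

# GPU model and driver version
nvidia-smi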

Estimated time

  • Magma package compile time is approximately 10 minutes.
  • PyTorch installation time is approximately 25 minutes.
  • Total compile and install time is approximately 45 minutes.
  • Test time is approximately 30 minutes.

Steps

1. Download and install Miniconda (optional)

We will now install Miniconda, a minimal Python distribution, by downloading and running the installer.

cd ~
wget -c https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-ppc64le.sh
chmod 744 Miniconda3-latest-Linux-ppc64le.sh
./Miniconda3-latest-Linux-ppc64le.sh

Follow the displayed instructions to accept the license and set the installation path. The installer will offer to add the conda install location to PATH in your ~/.bashrc file; we recommend that you do not, and instead invoke conda by its full path as shown below.
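
If you skipped the PATH modification, you can still invoke conda through its full path. A quick check that the install succeeded (assuming the default ~/miniconda3 install location):

~/miniconda3/bin/conda --version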

2. Create a virtual environment (optional)

Though this is an optional step, using a virtual environment helps keep your Python projects isolated on a single server. The following commands create an environment named pytorch with Python 3.6.

~/miniconda3/bin/conda create -n pytorch python=3.6
source ~/miniconda3/bin/activate pytorch
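
Once the environment is activated, python should resolve to the environment's interpreter (again assuming the default ~/miniconda3 install location):

which python       # ~/miniconda3/envs/pytorch/bin/python
python --version   # Python 3.6.x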

3. Verify and enable CUDA (GPU) support

Before compiling PyTorch, we need to enable CUDA support. Please see Verify the NVIDIA CUDA toolkit and driver for instructions.

In this example, the NVIDIA driver is 396.44:

$ nvidia-smi
Thu Dec  6 17:16:05 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.44                 Driver Version: 396.44                    |
|-------------------------------+----------------------+----------------------+

We use nvidia-396 in the following command. Make sure to use your version of the NVIDIA driver:

export PATH="/usr/lib/nvidia-396/bin/:$PATH"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
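
One quick way to confirm that the CUDA toolkit itself is present is to query the CUDA compiler by its full path (nvcc lives under /usr/local/cuda/bin, which the export above does not add to PATH):

/usr/local/cuda/bin/nvcc --version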

Compiling PyTorch requires some CUDA development packages. If your environment does not have those packages installed, you can install them with the following commands.

Note: Make sure to use your CUDA version.

# check CUDA version
$ cat /usr/local/cuda/version.txt
  CUDA Version 9.2.148  

sudo apt-get update
sudo apt-get install cuda-*-dev-9-2

4. Install the cuDNN library (optional)

Compiling PyTorch does not require the NVIDIA cuDNN library, which is a GPU-accelerated library of primitives for deep neural networks. However, you will get warnings when running unit tests that PyTorch was compiled without cuDNN support.

When installing cuDNN, make sure to use the version of the libcudnn library that matches your installed CUDA version. See Install the cuDNN library for instructions.

In this example, cuDNN was not yet installed, so we install it with the following commands:

echo "deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/ppc64el /" | sudo tee /etc/apt/sources.list.d/cudnn.list
curl -L http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/ppc64el/7fa2af80.pub | sudo apt-key add -

sudo apt-get update

# make sure to use the correct libcudnn version for your environment
sudo apt-get install libcudnn7=7.2.1.38-1+cuda9.2 libcudnn7-dev=7.2.1.38-1+cuda9.2

Set environment variables so that PyTorch will be compiled with cuDNN support.

# check for the location of the libcudnn.* file and use it in the command below
export CUDNN_LIB_DIR=/usr/lib/powerpc64le-linux-gnu/

# check for the location of the cudnn.h file and use it in the command below
export CUDNN_INCLUDE_DIR=/usr/include/
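
Before compiling, you can confirm that both paths actually contain the cuDNN files (the locations above are the Ubuntu package defaults; adjust them if your output differs):

ls $CUDNN_LIB_DIR/libcudnn.so*
ls $CUDNN_INCLUDE_DIR/cudnn.h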

5. Install the Magma library (optional)

Currently, Magma is listed as an optional dependency. However, a number of unit tests will result in runtime errors if it is not installed, so installing Magma is recommended. The Magma library is a dense linear algebra library that is used by PyTorch with CUDA. The following instructions compile and install both static and dynamic Magma libraries with OpenBLAS; other configurations such as ACML and ATLAS are also supported.

Check for the latest Magma package and download its tar file:

Note: At the time of writing, the latest version is magma-2.4.0.

cd ~
wget http://icl.utk.edu/projectsfiles/magma/downloads/magma-2.4.0.tar.gz
tar xzf magma-2.4.0.tar.gz
cd magma-2.4.0

Install gfortran as a dependency:

sudo apt-get update
sudo apt-get install gfortran

Install OpenBLAS with conda:

conda install openblas

Copy make.inc-examples/make.inc.openblas to make.inc in order to compile with OpenBLAS:

cp make.inc-examples/make.inc.openblas make.inc

Set the necessary environment variables:

export CUDADIR=/usr/local/cuda
export OPENBLASDIR=~/miniconda3/envs/pytorch
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$OPENBLASDIR/lib
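
Before building, you can verify that OPENBLASDIR points at the conda environment where OpenBLAS was installed:

ls $OPENBLASDIR/lib/libopenblas*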

Finally, build Magma from the extracted project directory:

make

Note: Compilation can be sped up with the -j option. For example, use make -j32 if you have a 32-thread POWER9 and would like to use all threads for compilation:

make -j<number of cpu threads for parallel compiling>

Now we can install Magma. The install prefix can be changed if needed; the default prefix is /usr/local/magma. Modify OPENBLASDIR and CUDADIR if needed:

sudo -E make install prefix=/usr/local/magma

Make sure that Magma was installed correctly by checking for libmagma.a, libmagma.so, libmagma_sparse.a, and libmagma_sparse.so under /usr/local/magma/lib:

ls -l /usr/local/magma/lib

When compiling or using PyTorch, make sure the Magma library is listed in the LD_LIBRARY_PATH environment variable:

export LD_LIBRARY_PATH=/usr/local/magma/lib:$LD_LIBRARY_PATH

6. Install Ninja (optional)

Install the Ninja build system. Ninja is an alternative build system to make. It is needed for the PyTorch C++ extensions unit tests to work, and also when developing PyTorch, where it enables fast incremental rebuilds of the C++ code.

# Install Ninja build system
sudo apt-get install ninja-build
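
You can confirm that Ninja is available on the PATH:

ninja --version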

7. Install PyTorch from source

Option 1: Using conda

Note: Instructions from this section are based on those from https://github.com/pytorch/pytorch#from-source

Install the dependencies:

export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"
conda install numpy pyyaml setuptools cmake cffi

Note: The mkl package, a math library for Intel and Intel-compatible processors, is not needed in Power environments.

Clone the PyTorch project:

cd ~
git clone --recursive https://github.com/pytorch/pytorch ~/pytorch

(Optional) Alternative MAGMA or CUDA library locations can be specified by setting the MAGMA_HOME, CUDA_BIN_PATH, and CUDA_HOME environment variables.

Install PyTorch from source:

cd ~/pytorch
python setup.py install
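
As a quick smoke test once the build finishes, import the installed package and check GPU availability. Run this outside the source tree, since importing torch from within ~/pytorch would pick up the unbuilt source package instead of the installed one:

cd ~
python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"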

Option 2: Using pip

Install the build dependencies, including an up-to-date pip:

pip install certifi cffi numpy setuptools wheel pip pyyaml

Note: A few of these packages, such as certifi, wheel, setuptools, and pip, may already have been installed by conda when the virtual environment was created.

Clone the PyTorch project:

cd ~
git clone --recursive https://github.com/pytorch/pytorch ~/pytorch

(Optional) Alternative MAGMA or CUDA library locations can be specified by setting the MAGMA_HOME, CUDA_BIN_PATH, and CUDA_HOME environment variables.

Build the PyTorch wheel:

cd pytorch
python setup.py bdist_wheel

A file named torch*.whl will be created in the dist directory. This file will be used to install PyTorch with pip.

Note: Compilation time may take about 25 minutes.
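
You can list the dist directory to see the generated file name, which encodes the PyTorch version, the Python version, and the ppc64le platform tag:

ls dist/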

Install PyTorch with pip:

Use the exact name of the wheel file from the dist directory:

pip install dist/<torch wheel file name>.whl

8. Test the installation

Run the PyTorch unit tests to ensure PyTorch is installed correctly:

# Install the hypothesis package, which is needed by the expecttest test
pip install hypothesis

cd ~/pytorch/test
python run_test.py
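
Note that the full suite can take a while. Each test file is also runnable on its own, which is handy when investigating a single failure (test_torch.py here is just one example):

python test_torch.py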

Note: PyTorch development is very active, with frequent updates. If you install from PyTorch master and encounter unit test failures, be sure to check the issue list, as some newly merged pull requests may affect test results.

9. Run PyTorch examples

Clone the PyTorch examples repository from GitHub:

cd ~
git clone https://github.com/pytorch/examples.git ~/examples

Run the different examples (every folder under the main repository is a separate example) by following the instructions in its README.md. For example, to run the word_language_model example:

cd ~/examples/word_language_model
pip install -r requirements.txt
python main.py --cuda --epochs 6        # Train an LSTM on PTB with CUDA, reaching perplexity of 117.61
python main.py --cuda --epochs 6 --tied # Train a tied LSTM on PTB with CUDA, reaching perplexity of 110.44
python main.py --cuda --tied            # Train a tied LSTM on PTB with CUDA for 40 epochs, reaching perplexity of 87.17
python generate.py                      # Generate samples from the trained LSTM model.
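
While an example is training, you can confirm from a second terminal that the GPU is actually being used:

# refresh the GPU utilization display every second
watch -n 1 nvidia-smi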

Summary

In this tutorial, we described the steps to compile and install PyTorch from the source code on a PowerPC 64 architecture server with NVIDIA GPU support.