By Catherine Diep, Simeon Monov, Peter Tan | Published November 30, 2017 - Updated December 10, 2018
PyTorch is a relatively new deep learning framework. Yet, it has begun to gain adoption, especially among researchers and data
scientists. The strength of PyTorch is its support for dynamic computational
graphs, while most deep learning frameworks are based on static computational
graphs. In addition, its strong, NumPy-like, GPU-accelerated tensor computation has allowed Python developers to easily learn
and build deep learning networks for GPUs and CPUs alike.
This tutorial will demonstrate how to compile and install PyTorch from source
code on a PowerPC 64 architecture server.
Note: IBM provides official
Docker images for IBM PowerAI, which
have many popular deep learning frameworks pre-installed. If the version of
PyTorch provided by the IBM Docker images works for you, that may be the easiest way
to bring up a PyTorch environment on a PowerPC 64 architecture server.
This tutorial will use a Power9 server with NVIDIA GPUs.
We will now install Miniconda, a minimal installer for the Anaconda Python
distribution, by downloading and running the installer.
wget -c https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-ppc64le.sh
chmod 744 Miniconda3-latest-Linux-ppc64le.sh
./Miniconda3-latest-Linux-ppc64le.sh
Follow the instructions displayed to accept the license and set the installation
path. We recommend that you do not add the conda install location to PATH in your ~/.bashrc file.
Though this is an optional step, using a virtual environment will help keep
your Python projects isolated on a single server. The following commands will
create an environment named pytorch with Python 3.6.
~/miniconda3/bin/conda create -n pytorch python=3.6
source ~/miniconda3/bin/activate pytorch
Before compiling PyTorch, we need to enable CUDA support. First, verify the NVIDIA CUDA toolkit and driver versions.
In this example, the NVIDIA driver version is 396.44, as shown in the nvidia-smi output:
Thu Dec 6 17:16:05 2018
| NVIDIA-SMI 396.44       Driver Version: 396.44 |
We use nvidia-396 in the following command. Make sure to use your version of
the NVIDIA driver:
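As a sketch, assuming the driver development package follows the nvidia-<version>-dev naming used in the CUDA apt repositories (this package name is an assumption, not confirmed for every repository):
# hypothetical example: replace 396 with your installed driver version
sudo apt-get install nvidia-396-dev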
Compiling PyTorch requires some CUDA development packages. If your environment
does not have those packages installed, you can install them with the following commands.
Note: Make sure to use your CUDA version.
# check CUDA version
$ cat /usr/local/cuda/version.txt
CUDA Version 9.2.148
sudo apt-get update
sudo apt-get install cuda-*-dev-9-2
Compiling PyTorch does not require the NVIDIA cuDNN library, which is a
GPU-accelerated library of primitives for deep neural networks. However, you
will get warnings when running unit tests that PyTorch was compiled without cuDNN support.
When installing cuDNN, use the version of the libcudnn library that matches the
installed CUDA version.
See Install the cuDNN library
for instructions to install cuDNN.
In this example, cuDNN was not installed. We will install cuDNN with the following commands:
echo "deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/ppc64el /" | sudo tee /etc/apt/sources.list.d/cudnn.list
curl -L http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/ppc64el/7fa2af80.pub | sudo apt-key add -
sudo apt-get update
# make sure to use the correct libcudnn version for your environment
sudo apt-get install libcudnn7=<libcudnn version>+cuda9.2 libcudnn7-dev=<libcudnn version>+cuda9.2
Set environment variables so that PyTorch will be compiled with cuDNN support.
# check for the location of the libcudnn.* file and use it in the command below
# check for the location of the cudnn.h file and use it in the command below
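For example, assuming cuDNN was installed from the Ubuntu packages above, with the library under /usr/lib/powerpc64le-linux-gnu and the header under /usr/include (these paths are assumptions; adjust them to the locations found on your system):
# paths below assume the default package locations; adjust as needed
export CUDNN_LIB_DIR=/usr/lib/powerpc64le-linux-gnu
export CUDNN_INCLUDE_DIR=/usr/include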
Currently, Magma is listed as an optional
dependency. However, a number of unit tests will result in runtime errors if it
is not installed, so installing Magma is recommended. Magma is a
dense linear algebra library used by PyTorch with CUDA. The following
instructions will compile and install both static and dynamic libraries for
Magma with OpenBLAS. Other configurations, such as ACML, are also supported.
Check for the latest Magma package and download its tar file:
Note: At the time this tutorial was updated, the latest version was 2.4.0.
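As a sketch, assuming the standard MAGMA download location at icl.utk.edu, the download might look like this:
wget http://icl.utk.edu/projectsfiles/magma/downloads/magma-2.4.0.tar.gz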
tar xzf magma-2.4.0.tar.gz
cd magma-2.4.0
Install gfortran as a dependency:
sudo apt-get update
sudo apt-get install gfortran
conda install openblas
Copy make.inc-examples/make.inc.openblas to make.inc in order to compile with OpenBLAS:
cp make.inc-examples/make.inc.openblas make.inc
Set the necessary environment variables:
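For example, assuming OpenBLAS was installed into the pytorch conda environment created earlier and the CUDA toolkit lives under /usr/local/cuda (both paths are assumptions; adjust them for your setup):
# OPENBLASDIR points at the conda environment holding OpenBLAS; CUDADIR points at the CUDA toolkit
export OPENBLASDIR=~/miniconda3/envs/pytorch
export CUDADIR=/usr/local/cuda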
Finally, build Magma from the extracted project directory:
Note: Compilation can be sped up by adding the -j option; for
example, make -j32 if you have a 32-thread Power8 and would like to use all
threads for compilation:
make -j<number of cpu threads for parallel compiling>
Now we can install Magma. The install prefix can be changed if needed; the
default prefix is /usr/local/magma. Modify OPENBLASDIR and CUDADIR if necessary:
sudo -E make install prefix=/usr/local/magma
Make sure that Magma was installed correctly by checking for libmagma.a,
libmagma.so, libmagma_sparse.a, and libmagma_sparse.so under /usr/local/magma/lib:
ls -l /usr/local/magma/lib
When compiling or using PyTorch, make sure the Magma library location is included in
the LD_LIBRARY_PATH environment variable:
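For example, with the default install prefix used above:
# add the Magma install location to the library search path
export LD_LIBRARY_PATH=/usr/local/magma/lib:$LD_LIBRARY_PATH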
Install the Ninja build system. Ninja is an alternative build system to make. It is
needed for the PyTorch C++ extension unit tests to work, and is also needed when
developing PyTorch, for dynamic rebuilds of the C++ code.
# Install Ninja build system
sudo apt-get install ninja-build
Note: Instructions in this section are based on the PyTorch project's own build-from-source instructions.
Install the dependencies:
export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"
conda install numpy pyyaml setuptools cmake cffi
Note: The mkl package, which is a math library for Intel and Intel-compatible
processors, is not needed for Power environments.
Clone the PyTorch project:
git clone --recursive https://github.com/pytorch/pytorch ~/pytorch
(Optional) Alternative Magma or CUDA library locations can be specified by
setting the MAGMA_HOME, CUDA_BIN_PATH, and CUDA_HOME environment variables.
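For example, assuming the default locations used earlier in this tutorial (only needed if the build does not find your Magma or CUDA installs automatically):
export MAGMA_HOME=/usr/local/magma
export CUDA_HOME=/usr/local/cuda
export CUDA_BIN_PATH=/usr/local/cuda/bin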
Install PyTorch from source:
cd ~/pytorch
python setup.py install
Install pip and its dependencies:
pip install certifi cffi numpy setuptools wheel pip pyyaml
Note: A few packages, such as certifi, wheel, setuptools, and pip, may have already been
installed by conda when creating the virtual environment.
Build the PyTorch wheel:
python setup.py bdist_wheel
A file named torch*.whl will be created in the dist directory. This file will
be used to install PyTorch with pip.
Note: Compilation time may take about 25 minutes.
Install PyTorch with pip:
Get the exact name of the wheel file for PyTorch installation.
pip install dist/<torch wheel file name>.whl
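As a quick sanity check, you can verify that the installed package imports and, on a GPU-equipped server, that CUDA is available:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"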
Run the PyTorch unit tests to ensure PyTorch is installed correctly:
# Install hypothesis package which is needed by the expecttest test
pip install hypothesis
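The tests can then be launched with the runner shipped in the PyTorch source tree, for example:
# run from the root of the PyTorch source tree (~/pytorch)
python test/run_test.py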
Note: PyTorch is very active with frequent updates. If you install from
PyTorch master and encounter unit test failures, be sure to check the issue
list as some newly merged pull requests may affect the test results.
Clone the PyTorch examples repository from GitHub:
git clone https://github.com/pytorch/examples.git ~/examples
Run the different examples (every folder under the repository is a separate
example) by following the instructions in each README.md. For example, to run
the word_language_model example:
cd ~/examples/word_language_model
pip install -r requirements.txt
python main.py --cuda --epochs 6 # Train a LSTM on PTB with CUDA, reaching perplexity of 117.61
python main.py --cuda --epochs 6 --tied # Train a tied LSTM on PTB with CUDA, reaching perplexity of 110.44
python main.py --cuda --tied # Train a tied LSTM on PTB with CUDA for 40 epochs, reaching perplexity of 87.17
python generate.py # Generate samples from the trained LSTM model.
In this tutorial, we described the steps to compile and install PyTorch
from the source code on a PowerPC 64 architecture server with NVIDIA GPUs.