Installing TensorFlow 2.0

Installing TensorFlow with optional GPU support.

CPU installation

CPU installation is very simple:

pip install tensorflow
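Once the install finishes, a quick sanity check from Python (just the standard TensorFlow API, nothing specific to this setup) confirms the package imports and the runtime works:

```python
import tensorflow as tf

# Print the installed version; this guide targets 2.x.
print(tf.__version__)

# Run a trivial op to confirm the runtime itself works.
total = tf.reduce_sum(tf.ones((2, 2)))
print(total.numpy())  # 4.0
```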

GPU installation

For GPU support, you’ll need at the very least CUDA. To see which TensorFlow versions are compatible with your OS, be sure to check this list.

For a native installation you will also require cuDNN, for which I’ve written Debian installation notes here.

If everything is correctly set up, you can just use

pip install tensorflow-gpu

to install GPU-supported TensorFlow.
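After installation, you can ask TensorFlow which GPUs it can actually use; `tf.config.list_physical_devices` is the standard TF 2.x call, and an empty list means CUDA/cuDNN or the driver were not found:

```python
import tensorflow as tf

# Lists every GPU TensorFlow can use; an empty list means
# CUDA/cuDNN were not found or the driver is not working.
gpus = tf.config.list_physical_devices("GPU")
print(gpus)
```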

Docker

As recommended in the installation guide, you can also use a Docker image with cuDNN and GPU-enabled TensorFlow preinstalled. There are many images available on Docker Hub; personally, I use

docker pull tensorflow/tensorflow:latest-gpu-jupyter

which I start with

docker run -it \
    --rm \
    --gpus all \
    -v /path/to/notebooks:/tf/notebooks \
    -v /path/to/.jupyter:/root/.jupyter/ \
    -p 8888:8888 \
    tensorflow/tensorflow:latest-gpu-jupyter

NB: you will more than likely need the NVIDIA container runtime and runtime hook. The long and short of it is: once the drivers have been installed, create a script nvidia-container-runtime-script.sh with the contents

curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
# with the repository registered, install the runtime itself
sudo apt-get install -y nvidia-container-runtime

Execute it with

sh nvidia-container-runtime-script.sh

You can check that GPU Docker and the NVIDIA driver are correctly installed with

docker pull nvidia/cuda:[version]

You may need to fetch the image tag matching your specific CUDA version, which you can see with

nvcc --version

Run the container, passing nvidia-smi as the command to confirm the GPU is visible from inside it

docker run --gpus all --rm nvidia/cuda:[version] nvidia-smi

My version combination is nvidia/cuda:10.1-cudnn7-devel-ubuntu16.04.

See here for troubleshooting some NVIDIA Docker images.

Validation

We can verify that TensorFlow has correctly identified the GPU with two lines of Python

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

which should print the name of your GPU and the PCI slot it is mounted in.
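Once the GPU shows up, you can pin work to it explicitly. A minimal sketch using the standard TF 2.x device API; soft placement is enabled here so the same snippet silently falls back to the CPU on a machine without a GPU:

```python
import tensorflow as tf

# Fall back to CPU silently if /GPU:0 does not exist.
tf.config.set_soft_device_placement(True)

# Place a small matrix multiply on the first GPU.
with tf.device("/GPU:0"):
    a = tf.random.uniform((256, 256))
    b = tf.random.uniform((256, 256))
    c = tf.matmul(a, b)

print(c.shape)  # (256, 256)
```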