In 10 simple steps, including a demo

On November 9, 2015, Google open sourced a software library called TensorFlow. TensorFlow is a library for machine learning and deep learning that performs numerical computation using data flow graphs. It can run on multiple CPUs and GPUs.
Since machine learning algorithms run on huge data sets, it is extremely beneficial to run them on CUDA-enabled Nvidia GPUs: their thousands of compute cores allow much faster execution.
It is always an annoying and time-consuming process to get a fast and consistent data science environment running. This is a step-by-step guide that helps you install an Anaconda environment with a runnable GPU kernel in a Jupyter Notebook. There are other ways of making your GPU usable for data science, but this is a very good option for beginners and Windows users. Why should I use my GPU for data science computations? Read this article if you don't know yet.
Check the Current Hardware Requirements
We have to start with your hardware. This is quite important, because you will need a CUDA Toolkit compatible graphics card, i.e. an Nvidia GPU. To check if your GPU is supported, follow this link: https://developer.nvidia.com/cuda-gpus and see if your GPU is on one of the lists. If you are not sure which GPU is installed in your system, follow these steps:
- Right-click on the desktop
- If you see "NVIDIA Control Panel" or "NVIDIA Display" in the context menu, you have an NVIDIA GPU
- Click on "NVIDIA Control Panel" or "NVIDIA Display"
- Look at "Graphics Card Information"
- You will see the name of your NVIDIA GPU
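If an Nvidia driver is already installed on your system, a quicker alternative is to open a command prompt and run the following command, which prints the GPU model and the installed driver version:
nvidia-smi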
Check the Current Driver and Software Requirements
After making sure your GPU supports the CUDA Toolkit, we have to install some tools in order to get the GPU running with TensorFlow. As those tools are developing fast, it is almost impossible to keep blog posts up to date version-wise. So I am trying to explain it in as much detail as possible, without focusing on a specific version. Sounds complicated, but it isn't. Our second step after the hardware check is to open the TensorFlow GPU website.
A few scrolls down the page you will find a software requirements list. At the time of writing this post, the requirements are the following: the Nvidia GPU drivers, CUDA Toolkit 10.1, CUPTI, cuDNN SDK 7.6 and, optionally, TensorRT 6.0.

Installation of the GPU Drivers
Now open the link to the latest Nvidia GPU drivers and fill out the form with your GPU specifications, like in this example:

Then click on "SEARCH" and this page will open:

Now simply download the Drivers and install them with the standard settings during the guided installation process.
Installation of the CUDA Toolkit
Now that your GPU drivers are up to date, we can continue with the list on the TensorFlow website. The next step is to install the CUDA Toolkit. At the time of writing this blog post, this is version 10.1, but as mentioned before, always install the version the website shows you.
To install the CUDA Toolkit, follow this link and a list of Toolkit versions similar to this one will open:

Now select the version the TensorFlow website shows you, in my case Toolkit 10.1. In the screenshot you can see that there are CUDA Toolkit 10.1, CUDA Toolkit 10.1 update 1 and CUDA Toolkit 10.1 update 2. As the website shows CUDA Toolkit 10.1, we are installing exactly this version and not one of the updates. Once you have selected the version, this page will show up:

Select the parameters as shown on the screenshot: Operating System = Windows, Architecture = x86_64, Version = 10 (for Windows 10), Installer Type = exe (local). Then click on "Download" and run the installer with the standard settings.
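Once the installer has finished, you can optionally verify the Toolkit installation by opening a new command prompt and running the following command, which should print the installed CUDA compiler version:
nvcc --version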
Installation of cuDNN
As CUPTI is included in the CUDA Toolkit, we can ignore that link and jump right to cuDNN. Please, as mentioned before, install the version shown to you on the TensorFlow website, in my case cuDNN SDK 7.6. Follow this link to get to the cuDNN website and click on "Download cuDNN".

To be able to download cuDNN, you have to log in with an Nvidia account. If you already have one, just log in; if not, create one by clicking on "Join now". It is completely free.

Once you are logged in this page will open:

Agree to the terms and, if the version you need is not shown (as in my case), click on "Archived cuDNN Releases". This page with all the cuDNN versions will show up:

As the TensorFlow website in my case says that I have to install cuDNN SDK 7.6, I am downloading version 7.6.5 for CUDA 10.1, because this is the CUDA Toolkit version I had to install.

Click on "cuDNN Library for Windows 10" to start the download. To install cuDNN, first go back to the Tensorflow website and check the paths on the bottom of the page:

The last one shows where to install cuDNN, in my case C:\tools\cuda. So first I am creating a folder "tools" on my C drive, by right-clicking -> New -> Folder (or with the shortcut CTRL+SHIFT+N), and naming it "tools":


Then we open the zip file with cuDNN, which contains a folder "cuda". We simply drag it into the new "tools" folder on our C drive and the cuDNN installation is done:
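After this step the cuDNN files should sit under C:\tools\cuda. As a rough orientation (the exact file names depend on the cuDNN version, so treat this as an example for cuDNN 7.6), the folder typically contains files like these:
C:\tools\cuda\bin\cudnn64_7.dll
C:\tools\cuda\include\cudnn.h
C:\tools\cuda\lib\x64\cudnn.lib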


Installation of TensorRT
Now that this is done, the last tool on the list on the TensorFlow website is TensorRT, in my case version 6.0, and we are going to install this tool as well, even though it is marked as optional. When opening the link we will see this page:

Scroll down to point "3. Download" and follow the instructions shown. I am going to select TensorRT 6, and specifically the version for Windows 10 and CUDA 10.1.


After the download has finished, unzip it and drag the folder into the same "tools" folder on your C drive, like the cuDNN folder before:

Update the PATH Variables
Now, to finalize the installation of all the tools shown on the TensorFlow website, we have to update our PATH variable with the entries shown at the bottom of the website:

To do this, open your Explorer, right-click on "This PC" and open "Properties".

Then open the "Advanced system settings" and then the "Environment Variables":


Scroll down to "Path" and edit it. Click on "New" and copy/paste the pahts from the TensorFlow website as I have marked it down below, so everything from C to the semicolon. Repeat this with the other three paths as well.



We have to do this with the TensorRT tool as well. To get the path of that tool, just open your Explorer, go to the C drive -> tools -> TensorRT -> bin, click in the path bar and copy/paste that path like the other four before. Then click "OK" and close the windows by clicking "OK".
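For reference, with CUDA 10.1 installed to its default location and the "tools" folder created as above, the new PATH entries typically look roughly like the ones below. This is only a sketch based on my setup; always copy the exact paths shown on the TensorFlow website, and note that the CUPTI subfolder name can differ between CUDA versions:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\CUPTI\lib64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include
C:\tools\cuda\bin
plus the TensorRT bin folder copied from the path bar in the step above.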
Installation of Anaconda
This was the last step of the instructions on the TensorFlow website. Now there is one more tool to install, if you haven't already, and this is Anaconda or Miniconda. I personally prefer Anaconda, so we are going to install it by following this link.

Click on "Download" and the page will scroll down. Select the 64-Bit Graphical Installer for Windows:

And install Anaconda with the standard settings.
Setting up the Conda Environment – Step 1
Now all the manual installation stuff is done and we can continue with the next step, setting up our conda environment to run Jupyter Notebook with GPU acceleration. To do so, click on the Windows button, type in "anaconda prompt" and start the prompt.


Type in the command "conda", run it with Enter and you will get a list of further commands:


Now let's install Jupyter by typing "conda install jupyter" and running it with Enter. Confirm by typing "y" and pressing Enter again.


Setting up the Conda Environment – Step 2
Download this file and save it in your user directory. To do so, click on "Raw", then right-click and choose "Save as". Select all file types in the Windows Explorer dialog and save the file as "tensorflow-gpu.yml".



Now go back to your Anaconda Prompt window and type in the following command, which creates an Anaconda virtual environment with all the necessary Python packages listed in the file. If you need more or different packages, you can either edit the .yml file or install them later in the Jupyter notebook.
conda env create -v -f tensorflow-gpu.yml
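For orientation, such an environment file might look roughly like the following. This is only a minimal sketch: the environment name matches the one used later in this post, while the Python and TensorFlow versions are assumptions that have to match the CUDA/cuDNN versions you installed, so the actual file you downloaded may differ:
name: tensorflow_gpu_2_10
channels:
  - defaults
dependencies:
  - python=3.7
  - jupyter
  - ipykernel
  - pip
  - pip:
    - tensorflow-gpu==2.1.0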

After running the command you should get the following response:

Type in "conda env list" to get a list of all your conda enviroments:

To activate the one we have created, just type in the following command:
conda activate tensorflow_gpu_2_10


And finally, copy the following command to register the environment as a kernel for Jupyter:
python -m ipykernel install --user --name tensorflow --display-name "tensorflow_gpu_python_37"
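If you want to double-check that the kernel was registered, you can list all installed kernels with the following command; the new name should appear in the output:
jupyter kernelspec list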


TL;DR: Testing your environment and GPU Performance
Now you are able to start JupyterLab via the command "jupyter lab" and select the TensorFlow GPU kernel ("tensorflow_gpu_python_37"). If you want to test the speed of your GPU vs. your CPU, just download this notebook.
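As a quick sanity check, independent of the benchmark notebook, you can also run the following cell in the new kernel. If everything is set up correctly, the GPU should show up in the device list (the exact version numbers depend on what you installed):
import tensorflow as tf

print(tf.__version__)                          # TensorFlow version of the active kernel
print(tf.test.is_built_with_cuda())            # True if this build includes CUDA support
print(tf.config.list_physical_devices('GPU'))  # should list at least one physical GPU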
Thank you very much for reading this blog post! If you have any updates or recommendations, feel free to post them below.