
Introduction
Text-to-image AI models have become hugely popular in the last couple of years. One of the most popular models is Stable Diffusion, created through a collaboration between CompVis, Stability AI and LAION. One of the easiest ways to try Stable Diffusion is through the Hugging Face Diffusers library.
With the release of the latest Intel® Arc™ GPU, we’ve gotten quite a few questions about whether the Intel Arc card supports running TensorFlow and PyTorch models, and the answer is YES! Built using the oneAPI specification, the Intel® Optimization for TensorFlow* and Intel® Extension for PyTorch* allow users to run those frameworks on the latest Intel GPUs.
To help people understand how to get PyTorch up and running on an Intel GPU, this post presents a quick walkthrough of getting one of the more fun AI workloads around up and running on the Intel Arc A770 GPU.
Setup
From my previous posts, you may know I have an Intel Alder Lake Core i9–12900KF Alienware R13 system. I will not be using that system as the base for this walkthrough, however, because I just put together a Raptor Lake 13th Gen Intel® Core™ i7-13700KF system for my testing, using an MSI Z690 Carbon WiFi motherboard with 64GB of DDR5-5600 RAM and a brand new Intel Arc A770. The system is running a fresh install of Ubuntu 22.04.1.

Stable Diffusion on Intel Arc
With that as the hardware base, let’s go through all the steps required to get Stable Diffusion up and running on this system.
Setting up the Base Software Stack
First, we need to install the Intel Arc drivers and the Intel® oneAPI Base Toolkit, so we follow the instructions here:
Specifically, I am using the APT instructions located here, taking special care to follow the Intel GPU driver installation instructions (step 2) exactly. I tried this with the drivers included in the Linux 6.0.x kernel and ran into some issues, so I recommend following the DKMS instructions and using the 5.17.xxx kernel referenced there.
Since we will be cloning Hugging Face repositories, which require Git and Git Large File Storage (Git LFS), we install both:
> sudo apt-get install git git-lfs
Python Setup
Let’s next set up our Python environment to work with Intel Arc. I’m using Python 3.9.15, so if you have a different version of Python, the instructions may vary a little.
Since this was a fresh Ubuntu installation, I did not yet have the pip Python package manager installed. A quick fix is to run:
> wget https://bootstrap.pypa.io/get-pip.py
> python get-pip.py
There are quite a few ways to install pip; depending on your version of Ubuntu, you may be able to install it directly through APT.
To keep our Python environment clean, we can set up the Python virtualenv module and create a virtual environment.
> python3 -m pip install --user virtualenv
> python3 -m venv venv
> source venv/bin/activate
Every step after this will be run in our Python virtual environment.
Hugging Face Setup
The next step is setting up Stable Diffusion. Mostly we are just following the Hugging Face instructions here, but I’ll inline them to make it easier.
If you don’t have a Hugging Face account, you need to go create one here:
Back on our system, we set up Hugging Face Hub and some base libraries and grab the Diffusers and Stable Diffusion code from Hugging Face:
> pip install transformers scipy ftfy huggingface_hub
> git clone https://github.com/huggingface/diffusers.git
> git clone https://huggingface.co/CompVis/stable-diffusion-v1-4 -b fp16
The Stable Diffusion checkout should ask you to log in with your Hugging Face account credentials, which looks like this:
Username for 'https://huggingface.co': <your_user_name>
Password for 'https://<your_user_name>@huggingface.co':
The final step is to install the Diffusers library itself, using pip and pointing it at the diffusers directory we just cloned:
> pip install ./diffusers
PyTorch and Intel Extensions For PyTorch Setup
Finally, we need to set up our Intel GPU configuration. Download the PyTorch and Intel® Extension for PyTorch* wheels:
> wget https://github.com/intel/intel-extension-for-pytorch/releases/download/v1.10.200%2Bgpu/intel_extension_for_pytorch-1.10.200+gpu-cp39-cp39-linux_x86_64.whl
> wget https://github.com/intel/intel-extension-for-pytorch/releases/download/v1.10.200%2Bgpu/torch-1.10.0a0+git3d5f2d4-cp39-cp39-linux_x86_64.whl
And install the wheels using pip:
> source /opt/intel/oneapi/setvars.sh
> pip install torch-1.10.0a0+git3d5f2d4-cp39-cp39-linux_x86_64.whl
> pip install intel_extension_for_pytorch-1.10.200+gpu-cp39-cp39-linux_x86_64.whl
Everything should now be set up to run a PyTorch workload on the Intel Arc GPU. If you optionally want to use the low CPU memory usage mode, you can install the accelerate library. This step should be done AFTER installing the Intel wheels, otherwise you will get some NumPy errors.
> pip install accelerate
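With the wheels installed, it is worth a quick sanity check that PyTorch can actually see the Arc GPU. A minimal sketch of such a check follows; the `xpu` device is what the Intel Extension for PyTorch registers, and the guarded import is just so the script degrades gracefully if the extension is missing:

```python
# Sanity check: can PyTorch see the Intel Arc GPU ("xpu" device)?
try:
    import torch
    # Importing the extension registers the "xpu" device with PyTorch.
    import intel_extension_for_pytorch as ipex  # noqa: F401
    xpu_ok = hasattr(torch, "xpu") and torch.xpu.is_available()
except ImportError:
    # torch or the extension is missing, so there is no Intel GPU support.
    xpu_ok = False

print("Intel GPU (xpu) available:", xpu_ok)
```

If this prints False on the Arc system, double-check that you sourced setvars.sh in the same shell before starting Python.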
Running Stable Diffusion
On to the fun part! Stable Diffusion is ready to use through the Hugging Face Diffusers library and the Intel Arc GPU is ready to accelerate it courtesy of oneAPI. To make it easy to run, I created this simple Python script that prompts the user for input and then opens the resulting output image:
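The script boils down to loading the pipeline from the local checkout, moving it to the Arc GPU, and generating an image. A sketch of such a script is below; the exact contents of the original may differ, the fp16 weights come from the branch cloned earlier, and the `xpu` device is the one provided by the Intel Extension for PyTorch:

```python
# run-stable-diffusion.py (sketch): prompt for keywords, generate an
# image on the Intel Arc GPU, and open the result.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401, registers "xpu"
from diffusers import StableDiffusionPipeline

# Load the fp16 weights from the local checkout cloned earlier.
pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("xpu")  # move the model onto the Intel Arc GPU

prompt = input("Enter keywords:\n")
image = pipe(prompt).images[0]  # run the diffusion loop

image.save("output.png")
image.show()  # open the generated image in the default viewer
```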
We can simply run the script, type something in and see the output:
> python run-stable-diffusion.py
Enter keywords:
AI GPU image for medium post
which produced the image at the top of this post.
Conclusion
Support for PyTorch and TensorFlow on discrete Intel GPUs is here! While many are excited about how Intel GPUs may affect the gaming GPU market, there are plenty of people who use GPUs for accelerated non-gaming workloads.
Just as Intel GPU support for Blender was exciting for many, the support for the popular AI frameworks is just another waypoint for the Intel GPU story. For those of us who do rendering, video editing, AI and other compute workloads, now is the time to get excited about Intel GPUs.
If you want to see what random tech news I’m reading, you can follow me on Twitter. Also, check out Code Together, an Intel podcast for developers that I host where we talk tech.
Tony is a Software Architect and Technical Evangelist at Intel. He has worked on several software developer tools and most recently led the software engineering team that built the data center platform which enabled Habana’s scalable MLPerf solution.
Intel and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.