Accelerated Distributed Training with TensorFlow on Google’s TPU

Understand your Hardware to Optimize your Software

Sascha Kirch
Towards Data Science
11 min read · Jan 31, 2022

Cloud TPUv3 POD by Google Cloud under (CC BY 4.0)

In this post, I will explain the basic principles of tensor processing units (TPUs) from a hardware perspective and then walk you step by step through accelerated distributed training on a TPU using TensorFlow, so you can train your own models.
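As a preview of what the walkthrough builds toward, connecting to a TPU in TensorFlow typically follows the pattern sketched below. This is a minimal sketch, assuming a TPU runtime such as Google Colab or a Cloud TPU VM; if no TPU is found, it falls back to the default single-device strategy.

```python
import tensorflow as tf

try:
    # Detect and connect to the TPU runtime (e.g. Colab or a Cloud TPU VM).
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except (ValueError, tf.errors.NotFoundError):
    # No TPU available: fall back to the default (single-device) strategy.
    strategy = tf.distribute.get_strategy()

print("Number of replicas:", strategy.num_replicas_in_sync)

# Variables and models created inside the strategy's scope are
# replicated across all available TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
```

On a TPUv3 board, `num_replicas_in_sync` would report 8 cores; on a machine without a TPU, the fallback strategy reports 1.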

