Motivation: As I contemplated whether to pay the $10 upgrade fee for Colab Pro, my main concern was "Is the GPU acceleration worth the monthly fee?" Sadly, I wasn’t able to find a good source to answer this question, so I wrote this story to help people like me.😉
Should I upgrade to Colab Pro? If you Google this question, you’ll find a lot of great articles. For example:
- GPU, RAM, high-level training time comparison by Dario Radečić: https://towardsdatascience.com/colab-pro-is-it-worth-the-money-32a1744f42a8
- Colab Pro main features: https://colab.research.google.com/signup
- Reddit discussion: https://www.reddit.com/r/MachineLearning/comments/f0wm0l/n_colab_pro_more_ram_longer_run_time_faster_gpus/
To add something new to the ongoing discussion, in this article I’ll dive deeper into the AI computing performance differences between the Free and Pro tiers of Google Colab.
Background
To compare the AI computing performance of the different Colab tiers, it isn’t sufficient to look at the hardware specifications alone. The most reliable approach is to run the same fixed AI models on both the Pro and Free versions and compare how long they take to complete identical training jobs.
Comparison Tool
For the comparison, we use AI Benchmark, an open-source tool available on GitHub. It was developed by a group of ETH Zurich researchers; when executed, it collects the training and inference times of 19 different machine learning models on fixed datasets.
Scoring
In addition to raw times, the benchmark evaluates training, inference, and overall AI performance in terms of pre-defined scores. Even though the tool doesn’t explicitly document the score formula, I looked into its source code and summarized the formula below.

Although the formula shown is for the training score, the inference score is calculated in the same way, and their sum gives the overall AI score. In what follows, you may want to pay most attention to the training results, since Colab is used more as a research tool than as a deployment platform.
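Schematically, the scoring can be sketched as follows (my paraphrase of the idea in the tool’s source, with per-model constants \(c_i\) as placeholders rather than the exact values AI Benchmark uses): a lower mean runtime yields a higher score.

```latex
S_{\text{train}} = \sum_{i=1}^{19} \frac{c_i}{\bar{t}_i},
\qquad \bar{t}_i = \text{mean training time of model } i,
\qquad S_{\text{AI}} = S_{\text{train}} + S_{\text{inference}}
```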
Comparison
The benchmark was conducted on:
- Colab Free with CPU only (Experiment link)
- Colab Free with T4 (Experiment link)
- Colab Pro with CPU only (Experiment link)
- Colab Pro with P100 (Experiment link)
- Colab Pro with V100 (Experiment link)
Colab used to offer the K80 in the free tier, but it hasn’t been seen for a while, so it is not included. The P100 and V100 are assigned randomly by Colab Pro; in my experience, the P100 is granted more often.

Note: for readability, not all information is included in the chart. The models are trained on different datasets, so comparing runtimes across models isn’t very meaningful.
Training Score Rankings
- Colab Pro with V100 – 16,289 points
- Colab Pro with P100 – 11,428 points
- Colab Free with T4 – 7,155 points
- Colab Free with CPU only – 187 points
- Colab Pro with CPU only – 175 points
Observation
I created this Google Sheet to include more details. From it, you can make the following observations:
- On average, Colab Pro with the V100 and P100 is respectively 146% and 63% faster than Colab Free with the T4. (source: "comparison" sheet, cells E6–G8)
- Even though the Pro GPUs are generally faster, there are still outliers; for example, Pixel-RNN and LSTM train 9%–24% slower on the V100 than on the T4. (source: "comparison" sheet, cells C18–C19)
- When restricted to CPUs, Pro and Free perform almost identically. (source: "training" sheet, columns B and D)
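The percentage speedups above come from a simple ratio of mean runtimes. With hypothetical runtimes plugged in (not the sheet’s real numbers), the calculation looks like this:

```python
def speedup_pct(t_baseline: float, t_faster: float) -> float:
    """Percent speedup of the faster device over the baseline."""
    return (t_baseline / t_faster - 1.0) * 100.0

# Hypothetical mean per-model training times in ms, chosen for illustration
t4, p100, v100 = 100.0, 61.3, 40.7

print(round(speedup_pct(t4, v100)))  # 146 -> V100 vs T4
print(round(speedup_pct(t4, p100)))  # 63  -> P100 vs T4
```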
Conclusion
If you regard long training time as a pain point in your research, I would highly recommend upgrading to Colab Pro. For example, I found myself spending over 20 hours per month on Colab, with nearly 50% of that time spent on model training. At roughly 2.5× the training speed, those 20 × 0.5 = 10 hours of training shrink to about 4, so Colab Pro saves me roughly 6 hours every month. On the other hand, if I were not a heavy Colab user, or had another way to accelerate model training, spending that $10 on a Hawaiian pizza would seem like a better idea.
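One way to sanity-check that estimate (a back-of-the-envelope sketch using my own usage numbers and the ~146% V100-over-T4 speedup as an assumption; a speedup of s cuts runtime by a factor of 1/(1+s), not to zero):

```python
monthly_hours = 20.0    # total Colab time per month (my usage)
training_share = 0.5    # fraction of that time spent training
speedup = 1.46          # V100 vs T4, from the comparison above

training_hours = monthly_hours * training_share          # 10 hours of training
hours_saved = training_hours * (1 - 1 / (1 + speedup))   # time not spent waiting
print(round(hours_saved, 1))  # 5.9
```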
Again, this article focuses only on AI computing performance. You should definitely weigh other aspects, such as RAM and maximum runtime, before making a decision.
Free 4 x V100 GPUs
If you are looking for more free GPU resources, I would like to introduce AIbro, a serverless model-training tool that my friends and I are currently working on. It helps you train AI models on AWS with one line of code. To collect more feedback from early adopters, we are currently providing free access to a p3.8xlarge instance (4 × V100). For more details, you may want to check out my previous story:
