Google releases EfficientNetV2 — a smaller, faster, and better EfficientNet

Higher performance than the state of the art, while training 5–10x faster

Mostafa Ibrahim
Towards Data Science

With progressive learning, our EfficientNetV2 significantly outperforms previous models on ImageNet and CIFAR/Cars/Flowers datasets. By pretraining on the same ImageNet21k, our EfficientNetV2 achieves 87.3% top-1 accuracy on ImageNet ILSVRC2012, outperforming the recent ViT by 2.0% accuracy while training 5x-11x faster using the same computing resources. Code will be available at https://github.com/google/automl/efficientnetv2.

Source: arXiv
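
The "progressive learning" mentioned in the abstract means gradually increasing the training image size while strengthening regularization (for example, dropout) at the same time, so early epochs are cheap and later epochs see the full-difficulty setting. The snippet below is only a minimal sketch of such a schedule; the stage count, image-size range, and dropout range are illustrative placeholders, not the paper's exact settings.

```python
# Minimal sketch of a progressive-learning-style schedule: image size and
# regularization strength grow together across training stages.
# The ranges below are illustrative placeholders, not the paper's settings.

def progressive_schedule(stage, num_stages=4,
                         min_size=128, max_size=300,
                         min_dropout=0.10, max_dropout=0.30):
    """Return (image_size, dropout_rate) for a given training stage."""
    # Interpolate linearly from the "easy" setting (small images, weak
    # regularization) to the "hard" one (large images, strong regularization).
    t = stage / max(num_stages - 1, 1)
    image_size = int(round(min_size + t * (max_size - min_size)))
    dropout = min_dropout + t * (max_dropout - min_dropout)
    return image_size, dropout

for stage in range(4):
    size, drop = progressive_schedule(stage)
    print(f"stage {stage}: image_size={size}, dropout={drop:.2f}")
```

In an actual training loop, each stage would rebuild the data pipeline at the new resolution and update the model's dropout rate before training continues.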

EfficientNets have been the SOTA for fast, high-quality image classification. They were released about two years ago and became popular for the way they scale, which made their training much faster than that of other networks. A few days ago, Google released EfficientNetV2, which is a big improvement over EfficientNet in training speed and a decent improvement in accuracy. In this article, we will explore how this new EfficientNet improves on the previous one.
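
Before getting into the details, it helps to have a rough, hands-on sense of model size in the two families. The sketch below counts trainable parameters for one model from each generation; it assumes a torchvision build (0.13 or newer) that ships both efficientnet_b7 and efficientnet_v2_s, and passing weights=None builds randomly initialized networks so nothing is downloaded. Which V1/V2 pair is actually comparable in accuracy is settled by the paper's benchmarks, not by this snippet.

```python
# Minimal sketch: compare trainable-parameter counts of an EfficientNet (V1)
# and an EfficientNetV2 model, assuming torchvision >= 0.13 provides both.
import torchvision.models as models

def count_params(model):
    """Number of trainable parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# weights=None -> randomly initialized models, no pretrained download needed.
effnet_b7 = models.efficientnet_b7(weights=None)
effnetv2_s = models.efficientnet_v2_s(weights=None)

print(f"EfficientNet-B7 : {count_params(effnet_b7):,} parameters")
print(f"EfficientNetV2-S: {count_params(effnetv2_s):,} parameters")
```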

The main foundation of better-performing networks such as DenseNets and EfficientNets is achieving strong performance with fewer parameters. When you decrease the number of…
