Creating Abstract Art with StyleGAN2 ADA
How I used Adaptive Discriminator Augmentation and transfer learning to generate improved abstract paintings with AI.
Back in August 2020, I created a project called MachineRay that uses Nvidia’s StyleGAN2 to create new abstract artwork based on early 20th-century paintings in the public domain. Since then, Nvidia has released a new version of their AI model, StyleGAN2 ADA, which is designed to yield better results when generating images from a limited dataset [1]. (I’m not sure why they didn’t call it StyleGAN3, but I’ll refer to the new model as SG2A to save a few characters.) In this article, I’ll show you how I used SG2A to create better-looking abstract paintings.
MachineRay 2
Overview
As with the original MachineRay, I use abstract paintings in the public domain as the source images to train the SG2A system. I then change the aspect ratio of the generated images as a post-processing step. This diagram shows the flow through the system.
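To give a concrete sense of the aspect-ratio post-processing step, here is a minimal sketch of how a square SG2A output could be stretched to a wider format using Pillow. The file names, the 4:3 target ratio, and the widen_image helper are illustrative assumptions, not the exact values or code used in the project.

```python
from PIL import Image


def widen_image(src_path: str, dst_path: str, aspect_ratio: float = 4 / 3) -> None:
    """Stretch a square generated image horizontally to the given aspect ratio.

    Note: the 4:3 default is an assumption for illustration; StyleGAN2 ADA
    itself outputs square images (e.g., 1024x1024).
    """
    img = Image.open(src_path)
    new_width = int(img.height * aspect_ratio)
    # Resize only the width, keeping the height, to change the aspect ratio.
    widened = img.resize((new_width, img.height), resample=Image.LANCZOS)
    widened.save(dst_path)


if __name__ == "__main__":
    # Hypothetical file names for a generated image and its post-processed output.
    widen_image("generated_0001.png", "painting_0001.png")
```

Stretching rather than cropping preserves the full composition of the generated painting, at the cost of slightly distorting shapes, which tends to be unobjectionable for abstract work.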