
Creating Abstract Art with StyleGAN2 ADA

How I used Adaptive Discriminator Augmentation and transfer learning to generate improved abstract paintings with AI.

Robert A. Gonsalves
Towards Data Science
11 min read · Jan 1, 2021

Sample Output from StyleGAN2 ADA, Image by Author

Back in August 2020, I created a project called MachineRay that uses Nvidia’s StyleGAN2 to create new abstract artwork based on early 20th-century paintings in the public domain. Since then, Nvidia has released a new version of its AI model, StyleGAN2 ADA, which is designed to yield better results when generating images from a limited dataset [1]. (I’m not sure why they didn’t call it StyleGAN3, but I’ll refer to the new model as SG2A to save a few characters.) In this article, I’ll show you how I used SG2A to create better-looking abstract paintings.
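To give a feel for what the ADA part does, here is a rough conceptual sketch (not code from MachineRay 2 or from Nvidia’s repo) of how the augmentation probability p is adjusted during training. The target value of 0.6 and the sign-based overfitting heuristic follow the ADA paper; the helper names discriminator_scores_on_reals and apply_augmentations are hypothetical placeholders.

```python
import numpy as np

# Conceptual sketch of Adaptive Discriminator Augmentation (ADA).
# The helpers referenced in the comments below are placeholders,
# not part of any real API.

TARGET_RT = 0.6        # overfitting heuristic target from the ADA paper
ADJUST_SPEED = 500_000 # images needed for p to sweep the full [0, 1] range
BATCH_SIZE = 64

p = 0.0  # probability that each augmentation is applied to a discriminator input

def update_augment_probability(p, real_scores):
    """Nudge p up when the discriminator overfits, down when it does not.

    r_t = E[sign(D(real))] drifts toward 1 as the discriminator grows
    overconfident on the training reals, which signals overfitting.
    """
    r_t = np.mean(np.sign(real_scores))
    step = BATCH_SIZE / ADJUST_SPEED
    p += step if r_t > TARGET_RT else -step
    return float(np.clip(p, 0.0, 1.0))

# Inside the training loop (placeholders for the real GAN steps):
# real_scores = discriminator_scores_on_reals(apply_augmentations(reals, p))
# p = update_augment_probability(p, real_scores)
```

The point of the adaptive loop is that with a small dataset the discriminator memorizes the training images quickly; ramping the augmentation strength up only as overfitting appears keeps the generator from inheriting the augmentations themselves.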

MachineRay 2

Overview

Similar to the original MachineRay, I use abstract paintings in the public domain as the source images to train the SG2A system. I then change the aspect ratio as a post-processing step. The diagram below shows the flow through the system.

MachineRay 2 Flow Diagram, Image by Author
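The exact post-processing code isn’t shown at this point in the article, but as a minimal sketch, changing the aspect ratio of a square SG2A output could look something like this with Pillow. The 1024×1024 source size, the 4:3 target ratio, and the file names are assumptions for illustration, not the project’s actual values.

```python
from PIL import Image

# Hypothetical post-processing step: stretch a square generated image
# to a painting-like 4:3 aspect ratio.
src = Image.open("generated_1024.png")   # e.g. a 1024x1024 SG2A output
target_w, target_h = 1024, 768           # 4:3 aspect ratio
resized = src.resize((target_w, target_h), Image.LANCZOS)
resized.save("generated_4x3.png")
```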
