GANs and Inefficient Mappings

How GANs tie themselves in knots and why that impairs both training and quality

Image source: Wikimedia Commons

A warning to mobile users: this article has some chunky gifs in it.

Figure 1: Abstract art produced by CAN (source: arXiv:1706.07068v1)
Figure 2: (top) output from a GAN trained on the MNIST dataset. The output changes unintuitively as (bottom) the levers are adjusted, one at a time, while the rest are held still. Only two of the 14 levers are adjusted; the remaining 12 (static) levers are not shown.
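The lever-adjusting procedure in figure 2 can be sketched as a latent traversal: hold every latent coordinate fixed and sweep a single one through its range, generating one image per setting. A minimal sketch follows, using a fixed random linear map as a hypothetical stand-in for the trained MNIST generator (the real generator, its architecture, and the choice of lever 3 are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the trained generator: a fixed random linear
# map from a 14-dimensional latent vector ("levers") to a 28x28 image.
W = rng.normal(size=(28 * 28, 14))

def generate(z):
    """Map a latent vector z of shape (14,) to a 28x28 'image'."""
    return (W @ z).reshape(28, 28)

# Hold 13 levers fixed and sweep one lever (here, lever 3) through [-1, 1].
z = rng.uniform(-1, 1, size=14)
frames = []
for value in np.linspace(-1, 1, 9):
    z_step = z.copy()
    z_step[3] = value              # adjust lever 3 only
    frames.append(generate(z_step))

frames = np.stack(frames)          # (9, 28, 28): one frame per lever setting
```

Rendering the frames in sequence yields an animation like the bottom row of figure 2.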

Problem One: Spiral

Figure 3: (left) the latent space and (right) sample space of a simple function that maps uniform noise in [-1, 1] × [-1, 1] to a spiral. Hue and value indicate which region of the latent space maps to which region of the sample space
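A target function like the one in figure 3 can be written in a few lines: one latent coordinate selects the position along the spiral arm, and the other adds perpendicular jitter so the arm has thickness. This is an assumed form of the mapping (the exact constants in the figure aren't stated), not the article's original code:

```python
import numpy as np

def spiral(z):
    """Map uniform noise z in [-1, 1]^2 onto a spiral in sample space.

    z[:, 0] sets the position along the arm; z[:, 1] adds radial jitter
    so the arm has some thickness. Constants are illustrative guesses.
    """
    t = (z[:, 0] + 1) / 2                    # position along the arm, in [0, 1]
    theta = 3 * np.pi * t                    # 1.5 turns of the spiral
    r = t + 0.05 * z[:, 1]                   # radius grows with t, plus jitter
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

z = np.random.default_rng(0).uniform(-1, 1, size=(1000, 2))
x = spiral(z)                                # (1000, 2) points on the spiral
```

Because the mapping is smooth and monotone in t, nearby latent points land on nearby parts of the arm, which is exactly the kind of untangled mapping the GAN fails to learn.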
Figure 4: A linear interpolation between the latent space and sample space of the function described in figure 3
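The interpolation animations in these figures can be produced by moving each point along the straight line between its latent position z and its mapped position f(z). A minimal sketch, using a placeholder mapping (any f(z) can be substituted):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, size=(500, 2))        # latent points
x = np.tanh(3 * z)                           # placeholder mapping f(z)

def interpolation_frames(z, x, steps=30):
    """Frames of p(t) = (1 - t) * z + t * f(z) for t from 0 to 1,
    morphing the latent cloud into the sample cloud."""
    return np.stack([(1 - t) * z + t * x for t in np.linspace(0, 1, steps)])

frames = interpolation_frames(z, x)          # (30, 500, 2)
```

The first frame is the latent space, the last is the sample space, and scrubbing through t shows how badly the mapping has to fold and stretch to get from one to the other.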

Results:

Figure 5: (left) the latent space and (right) output distribution of the GAN
Figure 6: A linear interpolation between the latent space and the GAN output
Figure 7: the spiral-generating GAN’s output during training (coloured) and samples from the target function (grey).

Problem Two: Eight Gaussians

Figure 8: the “eight gaussians” problem, with the latent space on the left and the sample space on the right.
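The eight-gaussians target is a standard toy distribution: eight Gaussian modes spaced evenly on a circle. A sketch of the sampler, with assumed radius and standard deviation (the figure doesn't state them):

```python
import numpy as np

def sample_eight_gaussians(n, radius=2.0, std=0.05, rng=None):
    """Sample n points from eight Gaussian modes spaced evenly on a circle.

    radius and std are illustrative values, not taken from the article.
    """
    rng = rng if rng is not None else np.random.default_rng()
    angles = 2 * np.pi * rng.integers(0, 8, size=n) / 8    # pick a mode
    centers = np.stack([radius * np.cos(angles),
                        radius * np.sin(angles)], axis=1)
    return centers + rng.normal(scale=std, size=(n, 2))    # jitter around it

x = sample_eight_gaussians(1000, rng=np.random.default_rng(0))
```

The interesting property is that the target is disconnected: a continuous generator must stretch the single connected latent square across eight separate modes, so some latent regions inevitably map to low-density space between them.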
Figure 9: A linear interpolation between the latent space and sample space of the eight gaussians problem

Results:

Figure 10: (left) the latent space and (right) output distribution of the GAN trained on the eight gaussians problem
Figure 11: A linear interpolation between the latent space and the GAN output

Problem Three: One Gaussian

Figure 12: the “one gaussian” problem, with the latent space on the left and the sample space on the right.
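The one-gaussian target is the simplest case: a single isotropic Gaussian. A sketch of the sampler, with assumed mean and standard deviation (the figure doesn't state them):

```python
import numpy as np

def sample_one_gaussian(n, mean=(0.0, 0.0), std=0.5, rng=None):
    """Sample n points from a single isotropic 2D Gaussian.

    mean and std are illustrative values, not taken from the article.
    """
    rng = rng if rng is not None else np.random.default_rng()
    return np.asarray(mean) + rng.normal(scale=std, size=(n, 2))

x = sample_one_gaussian(1000, rng=np.random.default_rng(0))
```

Unlike the spiral and the eight gaussians, this target is connected and unimodal, so a smooth, untangled latent-to-sample mapping exists; it serves as a baseline for whether the GAN finds one.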
Figure 13: A linear interpolation between the latent space and sample space of the one gaussian problem

Results:

Figure 14: (left) the latent space and (right) output distribution of the GAN trained on the one gaussian problem
Figure 15: A linear interpolation between the latent space and the GAN output

Closing Thoughts

Towards Data Science

A Medium publication sharing concepts, ideas, and codes.

Written by Conor Lazarou

Data scientist, generative art enthusiast, writer

