Making Art with AI

Human and Machine Collaboration for Unexplored Territory

Jonathan Follett
Towards Data Science


By Dirk Knemeyer and Jonathan Follett

Art and technology: it is a codependent relationship, as artists — people who strive to explore and express at the vanguard of humanity — seek out technologies that enable new forms of expression, even while serving as commentary on the state of culture and human progress. When we see the cave paintings at Lascaux — among the earliest examples of visual art remaining on Earth — we are looking at art made possible by technology. Humanity’s ingenuity in capturing rudimentary visual images in a lasting form was a remarkable advance, even if, today, it seems mundane and fundamental to our existence. In the intervening thousands of years, art and technology have driven one another forward, now taking such forms as artificial intelligence, the Internet of Things, and 3D printing.

So, how are artists using the latest technologies in their work? Much of the mainstream narrative centers on how AI creates recognizable, traditional forms of fine art. “The Next Rembrandt”, a 2016 project funded by a group of corporations led by the financial services company ING, used AI to generate a new Rembrandt-like painting that was 3D printed and displayed. In 2018, the Obvious artist collective created “Edmond de Belamy”, a portrait that evokes what might be called an impressionistic Rembrandt and which became the first AI-generated painting sold at auction, by Christie’s.

Figure 01: Memories of Passersby I, Mario Klingemann, 2018
[Photo courtesy of Onkaos]

Mario Klingemann, a German artist who works at the leading edge of technology, is best known for “Memories of Passersby I” — an autonomous machine that uses a system of neural networks to generate a never-ending, never-repeating stream of portraits of non-existing people — which was sold at Sotheby’s Contemporary Art Day Auction. Klingemann’s storied background also includes a stint as artist in residence at the Google Arts and Culture Lab. And his works have appeared at the Ars Electronica Festival and the Museum of Modern Art in New York, among many other venues.

Klingemann has focused on technology throughout his career. “I’m trying to use artificial intelligence to allow me to take it beyond what I could just do myself,” he says. “The very, very long term goal for me is to try to create these entirely autonomous machines that you might be able to call ‘artists’. … I am far away from that goal. But sometimes it’s good to have some goal on the horizon and try to get there,” says Klingemann.

“So, right now you could say I’m creating my own assistants, which have some capabilities that I have not. But, at the same time, it’s still me that does the art, and the AI or the deep learning, machine learning, is just some very powerful tool that is able to surprise me.”

Klingemann describes the evolution of his technology-infused art. “In the past it’s been mostly visual stuff — images, videos, installations that have some kind of visual component. So, I’m very interested in image making. But, I don’t feel like that’s the only place I will always be,” he says. Klingemann focuses on how future machines and tools will relate to humans. “I want to make things that keep us interested, that entertain us, that will show us something that, well, we didn’t think of ourselves,” he explains. “Currently I’m drifting off more into text generation. I’m not so sure if you can call it storytelling. … Because, for me, that’s a level deeper. In the end, a story is what everything boils down to.”

It is interesting that Klingemann, acclaimed for his AI-generated visual art, is shifting his focus to text. He’s drawn to the future, moving on to the next new thing when the future finally becomes the present. As AI visual creation tools become more mainstream, Klingemann sees AI and text as the next frontier. “As you probably know, now we have all these interesting models that are able to produce something that almost looks like a text that makes sense. But, of course, it’s still far away from writing a novel or even a short story that is actually at the point where we say, ‘This is a story I would want to tell somebody else.’ … I believe that this will improve very quickly, and we are already reaching this uncanny valley of text.

“[With images] we have reached a point where everything is so realistic … that you don’t even know that you’re looking at a machine-generated image. And the same will happen with texts. In some cases, cherry-picked examples, we have already reached that point. Of course, this will do something strange to ourselves,” says Klingemann. “We are still feeling so sure that we can totally say what’s real and what’s not, and we will lose that ability very soon.”

The Question of Creativity and Credit

At times, people question the role of the human in the creation of AI-augmented art. One typical reaction Klingemann encounters is: “‘Oh, but it’s the computer doing this,’ or ‘Well, all you do is write or program a little bit, but the machine does all the rest,’” he says. A deeply embedded belief Klingemann finds is this: “You still have to do something with your hands to [make it] true art, which of course, I find pretty ridiculous. If you look at contemporary art in particular — let’s say probably 80% of the artists that you know — the big names … none of them do their own art anymore. They have all their assistants and studios … that make [things] for them. So, they write a concept on the paper and then just come for a visit [to the studio] and see how it’s coming along. Or, you take something like a movie, where a director directs something.”

Interesting questions of definition dance around this narrative. In different media, the expectation of what a creator does is variable. On one hand, a movie director is joined by hundreds of other collaborators in ways large and small, yet the final product is anointed the director’s “film”. On the other hand, while the assistant-driven model is well-known at the apex of the fine art world, there is a significant difference in value and prestige between a work that comes directly and primarily from the master and a piece created under her direction. Why is there such a chasm between disciplines?

The introduction of AI, which requires human heads and hands at myriad points in the process, adds a new vector to the question of creativity and credit. It might be a good reason to look more broadly at these questions across disciplines, and to reconsider how we perceive collaboration and the work that springs from it.

Deep Learning: A Doorway to Multidimensional Thinking

Klingemann describes his latest thinking about deep learning as a technology: “One thing I have learned in the past four or five years — where I would say deep learning has really exploded — is [around] this question of multidimensionality. This is, for me, currently the holy grail: this idea that, if the machine learns something, whatever you try to teach it gets projected into this multidimensional space. And, from this multidimensional space, you extract information again by setting certain boundaries or having the machine identify certain clusters.”

“But, the fascinating thing is that we as humans have this desire to get clear answers, or to say black and white or true and false,” says Klingemann. “But, if you look at the raw data, there’s just this continuous space. And, only by asking it, ‘I want to know: is this a dog or is this a cat?’, do you then have, at some point, to define a boundary or a threshold, which is almost arbitrary. I mean, the data defines it somehow, but there’s never a clear edge.”
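As a toy illustration of that point, consider the sketch below (ours, not Klingemann’s): the model only ever produces a continuous score, and the hard label “dog” or “cat” appears solely because we impose a threshold. The scores are invented for the example.

```python
# Toy illustration: a hard category only exists once we choose a boundary.
# The scores are made up; think of them as a model's probability that an
# image shows a dog rather than a cat.
raw_scores = [0.08, 0.47, 0.53, 0.91]

def label(score: float, threshold: float = 0.5) -> str:
    # The "edge" between cat and dog is whatever threshold we picked.
    return "dog" if score >= threshold else "cat"

print([label(s) for s in raw_scores])                   # ['cat', 'cat', 'dog', 'dog']
print([label(s, threshold=0.55) for s in raw_scores])   # the 0.53 image flips to 'cat'
```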

“And so, I like this idea of, if we can take this multidimensional thinking at least into our consideration, and always thinking, ‘Yes.’ So, hard categories are actually a very human invention, or, it’s our desire to be certain, but there’s never certainty. Of course, the world we currently live in is all about the battle for certainty and who’s right and who’s wrong. So, I would rather say, ‘It’s all latent space, and so maybe we can never be 100% sure. And maybe something’s also a little bit of something else.’ Putting labels on things is a human thing. It’s not a machine thing.”

The idea that multidimensional thinking requires moving away from hard and specific human definitions, from the crisp boxes we are drawn to put things into, is an interesting notion. In learning more about humanity and theories of social evolution and development, we’re struck by the reality that we exist in a continuum and are rarely just one thing, or in just one place in our development and being. It’s a human tendency and preference to reduce complexity to something simple, something black-and-white. Machines, processing power notwithstanding, don’t have those limitations. They can consistently and at scale exhibit multidimensional thinking over all the data they survey. Beyond thinking about how best to leverage and extend these tools thanks to those abilities, we should learn from their facility in these ways and attempt to be less black-and-white in our own framing of other people and the world.

The Generative Adversarial Network as Creative Tool

A generative adversarial network (GAN) is made up of two neural networks that compete with each other. The generative network creates new output intended to mimic the training dataset it is fed, while the discriminative network attempts to distinguish whether a given sample is synthetic or real. For the GANs that Klingemann uses in his work, “They are actually trying to learn, in the case of portraits, how a portrait is made. So, it learns that eyes come in a certain statistical distribution, and skin color, hair, that there are certain proportions about faces,” he says.
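For readers who want to see the shape of that two-network contest in code, here is a minimal sketch in PyTorch. It is ours, not Klingemann’s setup: the architecture, the sizes, and the assumption of flattened grayscale portraits scaled to [-1, 1] are all illustrative.

```python
# A minimal GAN sketch: one generative network, one discriminative network,
# and a single training step in which they compete. Illustrative only.
import torch
import torch.nn as nn

latent_dim = 100        # random "noise" vector fed to the generator
image_dim = 64 * 64     # a flattened 64x64 grayscale image (an assumption)

# Generative network: noise in, synthetic image out.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),      # outputs scaled to [-1, 1]
)

# Discriminative network: image in, "how real does this look?" score out.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def training_step(real_images: torch.Tensor) -> None:
    """One round of the contest. `real_images` is a (batch, image_dim)
    tensor of training portraits scaled to [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real portraits from fakes.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                     real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```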

Klingemann describes how the GAN is an important tool in his creative process. “Once you have trained the GAN, some people are then just using it to create infinite amounts of images,” he explains. “But, for me, usually the process only starts then. One GAN becomes almost like a Lego brick or some kind of little engine that I then combine with other bricks and build all these pipelines where complexity quickly starts building up. So, what one model creates, the other one deforms. And then, I really like playing with feedback loops. So, at some point, I feed the output of the sixth model back into the input of the first and then you get to these systems, which start behaving, hopefully, in an interesting way. Often they just break. So, with any kind of these loop systems, then it becomes about finding the sweet spots in which you get kind of a beautiful balance between almost chaos and something that died and is producing a black screen. And that process is very much like, you have to use a lot of gut feeling there.”
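The pipeline-and-feedback idea can be sketched very simply. In the toy loop below, `models` is a hypothetical list of trained networks (a generator, a deformation model, and so on), each treated as a function from image to image; the last model’s output becomes the first model’s next input. Finding Klingemann’s “sweet spots” would then be a matter of tuning the models and the number of iterations.

```python
# A sketch of chaining models like Lego bricks and closing the loop.
# `models` is hypothetical: each element stands in for a trained network
# that maps an image tensor to an image tensor.
import torch

def run_feedback_loop(models, seed_image: torch.Tensor, iterations: int = 100):
    """Pass an image through the chain of models, then feed the final
    output back into the first model, over and over."""
    frames = []
    image = seed_image
    for _ in range(iterations):
        for model in models:        # e.g. model 1 creates, model 2 deforms ...
            image = model(image)
        frames.append(image)        # this output seeds the next pass: the feedback
    return frames
```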

“Because these systems are so complex, you cannot really say, ‘Okay, I change this number and that thing will work,’” says Klingemann. “It’s really turning knobs here and turning knobs there — maybe like with analog synthesizers or cooking. You learn by experience that it probably goes in this direction, but then there could always be something really strange happening around the corner. And of course that’s what you’re hoping for … that you end up in a space that you didn’t even know was there.”

The human element in setting up a GAN properly is essential. It is a variant of the old axiom: good data in, good data out. As much as we call machines “smart” or “artificial intelligence”, they are just a more complicated tool for a human to use. We may not think of a brush, paints, and canvas as technology, but they are on the same continuum as the palette of tools Klingemann uses in making his art.

“Memories of Passersby I”

With his influential piece, “Memories of Passersby I”, Klingemann wanted “… to create something that keeps on creating, in this case portraits, and never repeats and stays surprising. This piece consists of a computer that houses several GANs that work in a feedback loop and continuously create portraits. The idea is that when you are actually there to see it, you’re facing this infinite stream of very slowly changing portraits that are actually being made in the moment you watch them. So, also, if you’re alone in the room and you see something, you will be the only person who has ever seen this.”

Figure 02: Memories of Passersby I, Mario Klingemann, 2018
[Photo courtesy of Onkaos]

“The other idea is that it’s supposed to be a long-term piece. So, it’s not just for a day or two, but it’s supposed to work for years. The idea is that it stays surprising in some sense, even though it’s very limited in its possibility space — it will only create portraits. When we look at it and see it for a while, we get a certain idea like, ‘Okay, what will happen next?’ And I like the idea that you come back in a day or two and you will actually see some new portrait that doesn’t fit into how you estimated this system would evolve.”

“One of the pieces is at one of my collector’s homes … She’s the one who commissioned the first one. And, she’s still happy with it, because she comes in every morning and sees it and it’s a new face. And, it still hasn’t grown old on her,” says Klingemann. “She’s still being surprised, which makes me happy … When you build a complicated system like this … in the end I’m also limited in my abilities to foresee how it will develop. So, that’s the tricky part, that I’ve tried to build something that keeps on changing, but, of course, shouldn’t break or shouldn’t drift off into an area where it’s totally just noise.”

“I always say, ‘This piece is like a child that I tried to educate and give some values. But at some point I have to cut the cord and send it out in the world, and I hope it will keep on doing what I told it to.’ So, that’s unlike a painting where, when it’s done, it might age a little bit, but it will look the same in 20 or 50 years. In this case, it is a little bit alive, which also [means] I always hope it still lives.”

Discovering the New and Unexplored

Klingemann is using GANs and related technologies for his experiments and explorations in text generation. “I made these fake interviews with an artist … the whole interview is generated by the machine. And, it’s interesting because then you start feeding the output of the model back into its input and it keeps on doing these things, and then the question is, ‘Okay, I’ve tried to find ways to shape it further: how to take this and maybe filter it, adjust what it says?’ So, that’s currently ongoing experimentation, again, to see, because it is still a little bit uncontrollable. So, you can direct it in a certain way, but then, well, if you don’t like it, you just have to try again, and again, until you get something. And, of course, then I’m still the one looking at the text and deciding, ‘Does this make sense, or is this funny, is this interesting?’ So, the next step is, of course, for another algorithm to determine if whatever it has given me fits into whatever current theme I’m working on. But, of course, this is one building block, and further explorations will go into: when I have the story, can I turn this into a movie? Can I turn it into a comic book? Can I make music from it?”

“That’s what it always boils down to for me. That you have these latent spaces, these abstract multidimensional spaces, which, in theory, can become these universal translation spaces,” says Klingemann. “So, on one side you put text in and you get images out. Or, you feed an image in and then you translate it into text. In the end, all information becomes one in these spaces, and the hope is to extract something out of it which is enlightening to us or makes us happy or makes us wonder. That’s an ongoing search, which probably will never end. The search is so enjoyable for me so … I never want that search to end.”
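Conceptually, a “universal translation space” just means that text and images are projected into the same latent space, where they can be compared or converted. The sketch below assumes such embeddings already exist (for instance, from some multimodal encoder) and only shows the comparison step.

```python
# Conceptual sketch: once text and images share one latent space, "translation"
# can start with a nearest-neighbor lookup. The embeddings are assumed to come
# from some multimodal encoder; producing them is not shown here.
import torch
import torch.nn.functional as F

def closest_image(text_vec: torch.Tensor, image_vecs: torch.Tensor) -> int:
    """Given one text embedding of shape (d,) and a batch of image embeddings
    of shape (n, d) in the same latent space, return the most similar image."""
    sims = F.cosine_similarity(text_vec.unsqueeze(0), image_vecs, dim=-1)
    return int(sims.argmax())
```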

Perhaps, but of course everything does eventually end. And, as an artist who thrives in the new and unexplored, Klingemann is sensitive to how the landscape is shifting: “Well, what I can already see is that, unlike two, three years ago, now we have these practical tools becoming part of the tool sets of people who are not programmers. So, the ability to play with latent spaces gets into the hands of everybody. And, whilst in some sense, as I mentioned before, I’m a bit sad that I’m now not the only one who’s able to do it, it is of course a good thing, because other people who maybe have a less technical background have very different ways of creating art. … It’s great that it’s out, just like the camera made it into the hands of the whole world. It’s always hard to see how it will change our world, but it definitely will.”

Creative Next is a podcast exploring the impact of AI-driven automation on the lives of creative workers, people like writers, researchers, artists, designers, engineers, and entrepreneurs. This article accompanies Season 3, Episode 9 — Art and Technology.
