The first widely known attempt to use AI to make art was Google's DeepDream. DeepDream is built on a neural network that was originally trained as an image classifier - recognizing objects in images.

Alexander Mordvintsev realized that the network could also be run in reverse - instead of recognizing features in an image, it could amplify them and impose them on any image. This produced dream-like, highly psychedelic images.
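The core trick is gradient ascent on the input image itself: nudge the pixels in whatever direction makes a chosen feature detector fire harder. The toy sketch below illustrates this with a single hand-made edge filter standing in for a layer of a real trained network - the filter, image size, and step size are illustrative assumptions, not DeepDream's actual setup.

```python
import numpy as np

def activation(img, filt):
    # Slide the filter over the image and sum its responses: a scalar
    # score for "how strongly this feature fires" on the image.
    h, w = filt.shape
    total = 0.0
    for i in range(img.shape[0] - h + 1):
        for j in range(img.shape[1] - w + 1):
            total += np.sum(img[i:i+h, j:j+w] * filt)
    return total

def dream_step(img, filt, lr=0.1):
    # Because the activation is linear in the pixels, its gradient with
    # respect to the image is the filter "stamped" at every position.
    grad = np.zeros_like(img)
    h, w = filt.shape
    for i in range(img.shape[0] - h + 1):
        for j in range(img.shape[1] - w + 1):
            grad[i:i+h, j:j+w] += filt
    # Ascend the gradient: change the image so the feature fires harder.
    return img + lr * grad

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))              # a random "photo"
filt = np.array([[1.0, 0.0, -1.0]] * 3)    # toy vertical-edge detector
before = activation(img, filt)
after = activation(dream_step(img, filt), filt)
```

Repeating `dream_step` many times exaggerates the feature more and more, which is where the hallucinatory look comes from; DeepDream does the same thing with the learned features of a deep classifier instead of a hand-made filter.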

DeepDream creates psychedelic images

 

These days, when people talk about AI generated art they mean one of two distinct things: Neural Style Transfer, or Generative Adversarial Networks.

Neural Style Transfer is the name of a family of algorithms used to apply the style of one or more images to an input image. The operator of the algorithm chooses a content image (e.g., a picture of the Eiffel Tower) and a style image (e.g., The Starry Night by Vincent van Gogh), and the output is the first image rendered in the "style" of the second.
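Under the hood, most of these algorithms optimize the output image against two losses computed on feature maps from a pretrained network: a content loss (stay close to the content image's features) and a style loss (match the style image's Gram matrices, i.e. which feature channels fire together). Here is a minimal sketch of those losses, with random arrays standing in for real network activations - the shapes and weights are illustrative assumptions.

```python
import numpy as np

def gram(feats):
    # feats: (channels, pixels) feature maps from one network layer.
    # The Gram matrix records channel co-activations - a proxy for "style"
    # that deliberately throws away spatial layout.
    return feats @ feats.T / feats.shape[1]

def style_loss(feats, style_feats):
    return np.mean((gram(feats) - gram(style_feats)) ** 2)

def content_loss(feats, content_feats):
    # Content is matched directly, position by position.
    return np.mean((feats - content_feats) ** 2)

def total_loss(feats, content_feats, style_feats, alpha=1.0, beta=100.0):
    # alpha/beta trade content fidelity against stylization strength.
    return (alpha * content_loss(feats, content_feats)
            + beta * style_loss(feats, style_feats))

rng = np.random.default_rng(0)
content = rng.normal(size=(16, 64))  # pretend layer activations
style = rng.normal(size=(16, 64))
```

In a full implementation the features come from layers of a pretrained classifier (the original method used VGG), and the output image is produced by gradient descent on `total_loss` starting from the content image.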

 

Both DeepDream and Neural Style Transfer were amazing developments in Artificial Intelligence. But their output is not truly AI-generated art, since the user must choose both the input image and the style to apply. This is why they have been criticized as being, essentially, fancy Instagram filters - and indeed, this is very similar to how Instagram filters work.

The second kind, art based on Generative Adversarial Networks (GANs), is the closest thing we have to a human artist in the Artificial Intelligence world.

The final outcome is an algorithm that can generate new images from scratch that look like "real art". In practice, training such a generator requires training two separate neural networks: the "Critic" and the "Generator".

The Critic's role is to decide, given an image, whether or not it was made by a person. The Critic has access to a vast collection of human art, and for each new image it decides whether the image shares features with that collection.

The Generator takes a random seed as input and produces an image. Its goal is to convince the Critic that the output image is real. The Generator never sees any of the images in the dataset; it learns how to paint solely through feedback from the Critic.

The two networks are trained together, with the Critic trying to get better at detecting "fake" images and the Generator trying to get better at fooling the Critic.

When training concludes, the Critic is discarded and the Generator is used to generate new images.
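This alternating loop can be sketched with the smallest possible GAN: the "art" is just numbers drawn from a Gaussian, the Generator is a two-parameter affine map, and the Critic is a one-variable logistic classifier. Every size, learning rate, and distribution here is an illustrative assumption; the point is the structure of the loop, not the models.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Critic d(x) = sigmoid(w*x + c): probability that a sample is "real".
w, c = 0.1, 0.0
# Generator g(z) = a*z + b: turns random seeds z into samples.
a, b = 1.0, 0.0

lr, steps, batch = 0.05, 2000, 64
for _ in range(steps):
    real = rng.normal(4.0, 1.0, batch)   # the "human art" dataset
    z = rng.normal(0.0, 1.0, batch)      # random seeds
    fake = a * z + b

    # --- Critic step: push d(real) toward 1 and d(fake) toward 0 ---
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    gw = np.mean((1 - dr) * real - df * fake)   # ascent direction for w
    gc = np.mean((1 - dr) - df)
    w += lr * gw
    c += lr * gc

    # --- Generator step: push d(fake) toward 1 (fool the Critic) ---
    # Note the Generator never touches `real`; it only receives the
    # Critic's gradient signal through d(fake).
    df = sigmoid(w * fake + c)
    gfake = (1 - df) * w                        # d log d(fake) / d fake
    a += lr * np.mean(gfake * z)
    b += lr * np.mean(gfake)

# After training, discard the Critic and sample from the Generator alone.
samples = a * rng.normal(0.0, 1.0, 1000) + b
```

After training, the Generator's samples should cluster near the real data's mean of 4 even though it only ever saw the Critic's feedback. In a real GAN both networks are deep convolutional models and the samples are images, but the alternating structure is the same.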

Traditionally, GANs have tended to generate images that are very similar to an existing style. The famous Edmond de Belamy portrait, which sold for $432,500, is based on this technology.

We use a variation of Generative Adversarial Networks. Our Generator learns separately about style and content. This allows it to interpolate between styles and mix style and content in novel ways.