The first widely known attempt to use AI to make art was Google's DeepDream. DeepDream is built on a neural network that was originally trained as an image classifier - recognizing objects in images.

Alexander Mordvintsev realized that the network could also be run in reverse: instead of detecting features in an image, it could amplify them, painting eye- and face-like patterns onto the input. This created dreamlike, highly psychedelic images.

DeepDream creates psychedelic images

These days, when people talk about AI-generated art they usually mean one of two distinct things: Neural Style Transfer or Generative Adversarial Networks.

Neural Style Transfer is the name for a family of algorithms that apply the style of one or more existing images to an input image. The operator of the algorithm chooses a content image (e.g., a picture of the Eiffel Tower) and a style image (e.g., The Starry Night by Vincent van Gogh), and the output is the first image rendered in the "style" of the second.
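The core idea behind the best-known formulation of style transfer is that "style" can be summarized by correlation statistics (Gram matrices) of a network's feature maps. The sketch below is a simplified illustration in NumPy, with random arrays standing in for the convolutional feature maps a real implementation would extract from a pretrained network; the names `gram`, `style_loss`, and `content_loss` are hypothetical, not a specific library's API.

```python
import numpy as np

def gram(features):
    # features: (channels, height, width) feature map from one network layer.
    # The Gram matrix records which channels co-activate - a "style" statistic
    # that ignores *where* in the image things appear.
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(gen_feats, style_feats):
    # Match the Gram matrices of the generated image and the style image.
    return float(np.mean((gram(gen_feats) - gram(style_feats)) ** 2))

def content_loss(gen_feats, content_feats):
    # Match the content image's features directly, preserving layout.
    return float(np.mean((gen_feats - content_feats) ** 2))

rng = np.random.default_rng(0)
content_feats = rng.normal(size=(64, 32, 32))  # stand-in: content-image features
style_feats = rng.normal(size=(64, 32, 32))    # stand-in: style-image features
gen_feats = content_feats.copy()               # optimization starts from the content image

# A real algorithm would minimize this total by gradient descent on the pixels.
total = content_loss(gen_feats, content_feats) + 1e3 * style_loss(gen_feats, style_feats)
print(total)
```

At the start the content term is exactly zero; the optimization then trades a little content fidelity for a much better style match, which is what produces the "photo repainted as Starry Night" effect.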

Both DeepDream and Neural Style Transfer were remarkable developments in artificial intelligence. However, their output is not truly AI-generated art, as the user is required to supply images that already exist. For this reason, these algorithms were criticized for essentially being fancy Instagram filters - and indeed, the way they are applied is broadly similar.

The second kind of AI art is based on Generative Adversarial Networks (GANs), and it is the closest thing we have to a human artist in the world of artificial intelligence.

The result is an algorithm that can generate new, original images from scratch. In practice, training such a generator requires training two separate neural networks - the "Critic" and the "Generator".

The Critic is shown a vast database of human art in different styles from throughout history and learns to recognize its features.

The Generator, which has never "seen" art before, receives a random seed as input and generates an image from scratch. The output then goes through the Critic, which, based on what it has learned, decides whether or not the image looks like art made by a human.

The two networks are then trained together, with the Critic trying to get better at detecting "fake" images and the Generator trying to get better at fooling the Critic. In this way the Generator essentially learns how to make its own original art based solely on feedback from the Critic.

When the process is concluded, the Critic is discarded and the Generator alone is used to generate new images.
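The adversarial loop described above can be sketched in miniature. The toy example below is plain NumPy with every name hypothetical: a one-parameter "Generator" learns to imitate 1-D data against a logistic-regression "Critic", standing in for the deep image networks a real GAN would use.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# The "art" to imitate: samples from a normal distribution centered at 3.
real_data = lambda n: rng.normal(3.0, 0.5, size=n)

# Generator: a tiny affine map from random noise, g(z) = a*z + b.
a, b = 0.1, 0.0
# Critic: logistic regression, D(x) = sigmoid(w*x + c), outputs "looks real?".
w, c = 0.0, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(size=batch)
    fake = a * z + b
    real = real_data(batch)

    # --- Critic update: push D(real) toward 1 and D(fake) toward 0 ---
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) toward 1, i.e. fool the Critic ---
    d_fake = sigmoid(w * fake + c)
    upstream = (d_fake - 1.0) * w        # gradient of -log D(fake) w.r.t. fake
    a -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

# Training done: the Critic is discarded, the Generator makes new samples from noise.
samples = a * rng.normal(size=1000) + b
print(round(samples.mean(), 2))  # the sample mean should drift toward the real mean of 3
```

The Generator never sees the real data directly - it improves only through the Critic's feedback, which is exactly the dynamic the text describes, just in one dimension instead of pixel space.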

Traditionally, GANs tend to generate images that closely imitate an existing style. The famous Edmond de Belamy portrait, which sold for $432,500, is based on this technology.

We use a variation of Generative Adversarial Networks. Our Generator learns separately about style and content. This allows it to interpolate between styles and mix style and content in novel ways.
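One way to picture style interpolation - a hypothetical sketch, not our actual architecture - is a generator that receives a content vector and a style vector through separate pathways, so that blending two style vectors yields intermediate styles. Here a fixed random linear map stands in for a trained network, and all names (`generate`, `W_content`, `W_style`) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical generator: separate projections for content and style,
# combined into a 16x16 "image". A trained network would be nonlinear.
W_content = rng.normal(size=(16 * 16, 8))
W_style = rng.normal(size=(16 * 16, 8))

def generate(content_vec, style_vec):
    img = W_content @ content_vec + W_style @ style_vec
    return img.reshape(16, 16)

content = rng.normal(size=8)   # encodes *what* is depicted
style_a = rng.normal(size=8)   # e.g., one painterly style code
style_b = rng.normal(size=8)   # e.g., a second, different style code

# Interpolating between the two style codes while holding content fixed
# produces a smooth transition from one style to the other.
frames = [generate(content, (1 - t) * style_a + t * style_b)
          for t in np.linspace(0.0, 1.0, 5)]
```

Because content is held fixed while only the style code moves, every frame depicts the "same thing" rendered differently - the disentanglement that makes mixing style and content in novel ways possible.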