Computer Metaphor

COMPUTER METAPHOR was an art and research project exploring mixed class image generation using AI.

A fundamental aspect of human creativity is the mixing of symbols and meanings in new ways to generate fantasies, metaphors and imagined realities. Most existing artificial intelligence programs associate a single word with a single image in a 1:1 relationship. I was interested instead in constructing an AI that can draw sporadically, changing its mind partway through. The machine trained for this project draws more naturalistically because it can shift its focus between different subjects while still producing a single cohesive image. My research into mixed class generation using various network models is discussed in a more detailed post here. I present the final work as a curated book of fantastical imagery, meant to evoke the imaginative qualities of human and computer consciousness.

I was interested in a computer program that was capable of ambiguous thought.

The most successful network model was a variational autoencoder (VAE) built on the idea that the way we encode drawings in our minds exists in both space and time. This neural net consisted of a convolutional neural network (CNN) encoder and a long short-term memory (LSTM) decoder. The dual nature of the CNN-LSTM network allows it to identify spatial features within the image (CNN encoder) and relate them to stroke patterns in the drawing sequences (LSTM decoder), which lets the machine start drawing one object and then switch to drawing a second object based on what it has drawn so far. It was able to produce mixed object drawings with cohesive internal structure in terms of complete (closed-form) shapes, symmetry and naturalistic transitions.
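As a rough illustration only (the actual models and implementation details are in the linked research post), a CNN-encoder / LSTM-decoder VAE over stroke sequences might be wired up like this in PyTorch. The layer sizes, the 64x64 raster input, the class name SketchVAE and the stroke-5 sequence format are assumptions for the example, not the project's settings:

```python
# Minimal sketch of a CNN-encoder / LSTM-decoder VAE for stroke drawings.
# Hyperparameters and shapes are illustrative.
import torch
import torch.nn as nn

class SketchVAE(nn.Module):
    def __init__(self, latent_dim=128, hidden_dim=512, stroke_dim=5):
        super().__init__()
        # CNN encoder: reads the rasterised drawing and extracts spatial features.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 16x16 -> 8x8
            nn.Flatten(),
        )
        self.to_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.to_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        # LSTM decoder: unrolls the latent code into a pen-stroke sequence
        # (dx, dy, pen-down, pen-up, end-of-drawing), i.e. stroke-5 format.
        self.init_state = nn.Linear(latent_dim, 2 * hidden_dim)
        self.decoder = nn.LSTM(stroke_dim + latent_dim, hidden_dim, batch_first=True)
        self.to_stroke = nn.Linear(hidden_dim, stroke_dim)

    def encode(self, image):
        h = self.encoder(image)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z, strokes):
        # Condition every decoder step on the latent code z.
        z_seq = z.unsqueeze(1).expand(-1, strokes.size(1), -1)
        h0, c0 = torch.tanh(self.init_state(z)).chunk(2, dim=-1)
        out, _ = self.decoder(
            torch.cat([strokes, z_seq], dim=-1),
            (h0.unsqueeze(0).contiguous(), c0.unsqueeze(0).contiguous()),
        )
        return self.to_stroke(out)

    def forward(self, image, strokes):
        # Training would combine a stroke-reconstruction loss with the usual KL term.
        mu, logvar = self.encode(image)
        z = self.reparameterize(mu, logvar)
        return self.decode(z, strokes), mu, logvar
```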

In order to create an AI capable of dreaming, I had to make an AI capable of encoding drawings in both space and time.

I demonstrate this AI by having it produce a booklet of fantastical future creatures, switching between latent vectors for common animals and technological components:
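To make "switching between latent vectors" concrete, the hypothetical snippet below reuses the SketchVAE sketch above and swaps the latent code from an animal to a machine part partway through an autoregressive decoding loop. The placeholder images, the 120-step length, the switch point and the raw linear stroke output are all illustrative choices, not the project's method:

```python
# Illustrative only: an untrained model and random placeholder images,
# showing the mechanics of swapping latent codes mid-drawing.
import torch

model = SketchVAE()
model.eval()

with torch.no_grad():
    # Placeholder 1x1x64x64 rasters standing in for reference drawings of each class.
    cat_image = torch.rand(1, 1, 64, 64)
    gear_image = torch.rand(1, 1, 64, 64)
    z_animal, _ = model.encode(cat_image)
    z_machine, _ = model.encode(gear_image)

    # Initialise the decoder state from the first latent code, then generate
    # strokes one step at a time, carrying the LSTM state across steps.
    h, c = torch.tanh(model.init_state(z_animal)).chunk(2, dim=-1)
    h, c = h.unsqueeze(0).contiguous(), c.unsqueeze(0).contiguous()

    stroke = torch.zeros(1, 1, 5)               # start token
    strokes = []
    for t in range(120):
        z = z_animal if t < 72 else z_machine   # switch latent ~60% of the way in
        step_in = torch.cat([stroke, z.unsqueeze(1)], dim=-1)
        out, (h, c) = model.decoder(step_in, (h, c))
        stroke = model.to_stroke(out)           # a real model would sample from a
        strokes.append(stroke)                  # richer output distribution here
    drawing = torch.cat(strokes, dim=1)         # (1, 120, 5) stroke-5 sequence
```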

SAMPLES

DRAWINGS

The generated images were then drawn on rice paper using a pen plotter robot:
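The plotting pipeline itself isn't documented here; one plausible bridge between generated stroke sequences and a pen plotter is to export each drawing as an SVG of polylines, which most plotter toolchains accept. The helper below is a hypothetical sketch under that assumption, with an illustrative scale and page size:

```python
# Hypothetical helper: convert a stroke-5 sequence into an SVG of polylines
# that a plotter workflow can trace. Scale, origin and page size are illustrative.
def strokes_to_svg(strokes, path="creature.svg", scale=3.0, origin=(200.0, 200.0)):
    """strokes: iterable of (dx, dy, p_draw, p_lift, p_end) stroke-5 rows."""
    x, y = origin
    paths, current = [], [(x, y)]
    for dx, dy, p_draw, p_lift, p_end in strokes:
        nx, ny = x + dx * scale, y + dy * scale
        if p_draw >= max(p_lift, p_end):
            current.append((nx, ny))            # pen on paper: extend the current line
        else:
            if len(current) > 1:
                paths.append(current)           # pen lifted: start a new polyline
            current = [(nx, ny)]
        x, y = nx, ny
        if p_end >= max(p_draw, p_lift):        # end-of-drawing token
            break
    if len(current) > 1:
        paths.append(current)

    polylines = "\n".join(
        '<polyline fill="none" stroke="black" points="'
        + " ".join(f"{px:.1f},{py:.1f}" for px, py in pts)
        + '"/>'
        for pts in paths
    )
    svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">\n'
           + polylines + "\n</svg>")
    with open(path, "w") as f:
        f.write(svg)
```

A sequence generated as in the earlier snippet could then be exported with something like `strokes_to_svg(drawing[0].tolist())` and handed to whatever SVG-driven plotter software is in use.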

BOOKLET

The final generated images are presented as part of a 25-page booklet printed on rice paper and bound with dyed blue cardboard:


Mixed Class Neural Networks

Neural network models (including code and implementation details) can be found on this page (scroll down).