Collaborative Drawing is an ongoing experiment in drawing jointly with an artificial intelligence.
Collab_Draw is a custom application that lets the user draw collaboratively with an AI. With default parameters, the user and the machine take turns drawing strokes. It is compatible out of the box with a variety of TensorFlow-trained neural networks and can easily be modified to suit other trained networks.
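To make that swap-ability concrete, the expectation is roughly that a trained model exposes a small stroke-level interface. The sketch below shows one plausible shape for that interface; the class and method names are hypothetical, not the application's actual API.

```python
class StrokeModel:
    """Hypothetical minimal interface a trained network could expose
    so Collab_Draw can drive it, regardless of the underlying framework."""

    def encode(self, strokes):
        """Map a list of prior strokes to a latent vector."""
        raise NotImplementedError

    def decode(self, latent, temperature=1.0):
        """Generate the next stroke (a list of points) from a latent vector."""
        raise NotImplementedError
```

Any TensorFlow model (or anything else) wrapped to satisfy this interface could then be dropped into the exchange loop.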
Exchange parameters let the user modify how the computer and human artists interact, including:
- the rate of exchange (number of strokes from human vs. computer)
- the memory of the computer (how many prior strokes the computer considers when deciding the next stroke)
- how the computer uses the user's input (decode only, encoder-decoder, random generation)
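The exchange parameters above can be sketched as a simple turn-taking loop. This is a toy illustration, not the application's actual code; the class, parameter, and method names are my own assumptions.

```python
class ExchangeSession:
    """Toy sketch of a turn-taking exchange loop (hypothetical API)."""

    def __init__(self, model, rate=1, memory=5, mode="encoder-decoder"):
        self.model = model    # any object with a predict(strokes) method
        self.rate = rate      # human strokes drawn before each computer stroke
        self.memory = memory  # how many prior strokes the model conditions on
        self.mode = mode      # "decode", "encoder-decoder", or "random"
        self.strokes = []     # shared canvas history

    def add_human_stroke(self, stroke):
        self.strokes.append(stroke)
        # after `rate` human strokes, the computer takes its turn
        if len(self.strokes) % (self.rate + 1) == self.rate:
            self.strokes.append(self.computer_stroke())

    def computer_stroke(self):
        if self.mode == "random":
            context = []  # unconditioned generation ignores user input
        else:
            context = self.strokes[-self.memory:]  # bounded "memory"
        return self.model.predict(context)
```

Raising `rate` gives the human more strokes per machine response; shrinking `memory` makes the machine respond only to the most recent marks.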
Performance parameters modify the UI of the application. Through experimentation, I have found that the specific UI has a profound impact on the user’s experience of drawing together with a program:
- computer stroke display (the computer can simulate a slow stroke movement like a human hand, display its output all at once, or withhold its output until the end of the session)
- undo option
- user color choice option
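The three stroke-display modes can be sketched as a generator that yields what the canvas should show at each frame. This is an illustrative sketch with hypothetical names, not the Kivy implementation itself.

```python
def reveal_stroke(points, mode="animated", chunk=4):
    """Yield progressively longer prefixes of a stroke's point list,
    mimicking the three display modes (names are hypothetical)."""
    if mode == "withheld":
        return               # show nothing until the session ends
    if mode == "instant":
        yield points         # the whole stroke appears at once
        return
    # "animated": reveal a few points per frame, like a moving hand
    for i in range(chunk, len(points) + chunk, chunk):
        yield points[:i]
```

In a real UI, each yielded prefix would be redrawn on a timer, so the animated mode reads as a hand slowly tracing the line.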
The final parameter is the specific trained network model, along with model parameters such as temperature (how variable the decoder's output is). For my experiments, I almost exclusively use Variational Autoencoders. Different trained models may result in a program that aims to form complete shapes, to draw distinct objects, or to produce abstract forms.
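Temperature works the standard way for generative decoders: logits are divided by the temperature before sampling, so low values make the model pick its most likely stroke almost deterministically and high values flatten the distribution. A minimal sketch (not the project's actual sampling code):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Temperature-scaled categorical sampling over raw logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                            # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # inverse-CDF sampling
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

At temperatures near zero this reduces to argmax; at high temperatures it approaches uniform sampling, which is what makes the drawn strokes more "variant".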
The application is implemented using Kivy in Python.
For this initial experiment, I produced a series of drawings titled “10,001 Fireworks” in an exchange between myself and an AI trained on inputs from 10,000 other people. I was interested in the aesthetics of these explosions, which are simultaneously man-made and uncontrollable in nature.
This AI was trained using custom vectorized data (see my blog post on that here) of hand-drawn images. In summary, I created the dataset from scraped hand-drawn images, which I vectorized using computer vision processes and then split into 10K training, 2.5K validation, and 2.5K test sets. The vectorization process consisted of edge detection, centerline tracing, and line simplification (using the Ramer-Douglas-Peucker algorithm). The AI was trained for 50K steps (using Adam), which I found was sufficient to produce convincing free-form strokes but not so much training that it would rigidly produce recognizable objects.
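The line-simplification step mentioned above follows the classic Ramer-Douglas-Peucker recursion: keep the point farthest from the chord between a stroke's endpoints if it exceeds a tolerance, and recurse on both halves. Below is a standard reimplementation for illustration, not the exact code from my pipeline.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification of a polyline
    given as a list of (x, y) tuples."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    # perpendicular distance from each interior point to the endpoint chord
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = math.hypot(y2 - y1, x2 - x1) or 1e-12
        d = num / den
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # keep the farthest point and simplify both halves
        left = rdp(points[: index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```

A larger `epsilon` discards more points, which trades fidelity for the sparse, gestural strokes that sequence models train on well.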
Interestingly, the resulting neural net was capable of independently producing a wide range of pen stroke styles:
This diverse array of stroke styles was unexpected; it is a product of the vectorization process that I am still investigating how to recreate more reliably. The program also has a tendency to form flowing curved or straight lines, or closed forms.
Layering many of these outputs allowed me to create this final image:
I am also experimenting with a machinic cultural exchange between an AI trained on 100K random objects drawn by Chinese artists and an AI trained on 100K random objects drawn by US artists, both from the Google Quickdraw dataset.
I held an interactive demo of the tool during a residency at The Laboratory.