Training my AI

To train a custom AI that draws like me, I used CycleGAN. I trained the GAN on 28 of my sketches, drawn over top of Flemish plein-air and early Impressionist paintings, with everything dissected into unique 256×256 crops. The 28 paintings were obtained through the MET Open Access API, and I drew over top of them using the Concepts app on iPad. Here are some examples of A and B inputs:

“The Beeches”

The-Beeches.jpg

The-Beeches.jpg

“A Bird's Eye View”

A-Bird's-Eye-View.jpg

A-Bird's-Eye-View.jpg

“Boating”

Boating.jpg

Boating.jpg

“The Road from Versailles to Louveciennes”

The-Road-from-Versailles-to-Louveciennes.jpg

The-Road-from-Versailles-to-Louveciennes.jpg
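The dissection into 256×256 crops can be sketched roughly like this. This is my reconstruction, not the exact script; the helper name and sizes are illustrative, and the box tuples it produces are the kind you would feed to something like PIL's `Image.crop()`:

```python
def crop_boxes(width, height, size=256):
    """Compute non-overlapping size x size crop boxes for an image.

    Returns (left, top, right, bottom) tuples; crops that would run
    past the image edge are simply skipped, so every box is unique
    and fully inside the image.
    """
    boxes = []
    for top in range(0, height - size + 1, size):
        for left in range(0, width - size + 1, size):
            boxes.append((left, top, left + size, top + size))
    return boxes

# A 512x512 painting yields four unique 256x256 crops.
print(len(crop_boxes(512, 512)))
```

Running the same boxes over a painting and over my drawn-on copy of it keeps the A and B crops aligned, which is what the paired examples above show.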

I trained on these 256 crops for 400 epochs, with 256 inner loops per epoch.
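For reference, a training run like this would look roughly as follows, assuming the widely used junyanz/pytorch-CycleGAN-and-pix2pix implementation (the post doesn't say which CycleGAN code I used, so treat this as a sketch; the dataset path and experiment name are placeholders):

```shell
# Dataset layout assumed by that repo:
#   datasets/cole_drawings/trainA  (my drawing crops)
#   datasets/cole_drawings/trainB  (painting crops)
python train.py --dataroot ./datasets/cole_drawings \
  --name cole_drawing_cyclegan --model cycle_gan \
  --load_size 256 --crop_size 256 \
  --n_epochs 200 --n_epochs_decay 200
```

The last two flags split the run into epochs at the initial learning rate and epochs of linear decay, which together add up to a 400-epoch schedule.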

Using my Cole drawing AI

A sample of various inputs (photographs and Stable Diffusion outputs) run through my GAN drawing AI:

cole-hand-ai-v1-contact-sheet-1.jpg

There are 6 samples here, arranged in a grid, where each sample is: [ # input | # AI drawing prediction | # painting prediction ]

So the entire grid is:

| 1 input | 1 AI drawing | 1 AI painting | 2 input | 2 AI drawing | 2 AI painting |
| --- | --- | --- | --- | --- | --- |
| 3 input | 3 AI drawing | 3 AI painting | 4 input | 4 AI drawing | 4 AI painting |
| 5 input | 5 AI drawing | 5 AI painting | 6 input | 6 AI drawing | 6 AI painting |

After my AI creates the Cole drawing, I run the output through a few cleanup filters in Photoshop and Illustrator to generate a cleaned-up SVG:
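My Photoshop and Illustrator steps are manual, but a scriptable rough equivalent (not my actual pipeline) would be to threshold the raster output with ImageMagick and trace it to SVG with potrace; filenames here are placeholders:

```shell
# Flatten the GAN output to a clean black-and-white bitmap,
# then trace the bitmap into vector paths as SVG.
magick drawing.png -colorspace Gray -threshold 60% drawing.pbm
potrace --svg drawing.pbm -o drawing-clean.svg
```

The threshold percentage controls how much of the lighter sketch texture survives into the final vector.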