A Neural Network That Misunderstood Everything I Ever Showed It.

It should come as no surprise when a learning machine trained to recognize mundane objects misidentifies the starship Enterprise. After all, it never learned about science fiction, or even basic astronomy. You see, recognition is a matter of statistical inference and algebra. There's no ghost in the machine directing the AI, just multiplication and addition.
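
To make that concrete, here is a toy sketch of what recognition reduces to: multiply pixels by weights, add, repeat, and take the biggest score. The weights are random stand-ins and the labels are made up for the joke; nothing here resembles a real trained network.

```python
import numpy as np

# Toy "recognition": weighted sums passed through a simple nonlinearity.
# Random weights, illustrative labels -- not a trained model.
rng = np.random.default_rng(0)

pixels = rng.random(64)                      # a flattened 8x8 "image"
W1, b1 = rng.standard_normal((32, 64)), np.zeros(32)
W2, b2 = rng.standard_normal((3, 32)), np.zeros(3)

hidden = np.maximum(0, W1 @ pixels + b1)     # multiply, add, clip at zero
scores = W2 @ hidden + b2                    # multiply and add again

labels = ["CD Player", "Odometer", "Digital Clock"]
print("best guess:", labels[int(np.argmax(scores))])
```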

Ville-Matias Heikkilä / A neural network tries to identify objects in ST:TNG

The computer has hilariously misidentified something obvious, but it's also obvious that the Enterprise-D is indeed something like a combination of CD Player, Odometer, and Digital Clock. In fact, those objects draw on the same visual goals that inform the iconic starship design. The computer was merely wrong, but together we arrived at poetry.

The Universal Visual Language

Accompanied only by the short caption “this image was created by a computer on its own”, the picture has been going viral on Twitter and Tumblr since it was leaked roughly a week ago (June 11, 2015)

This was my introduction to a gentle story about AI with no hint of robot uprising or exploitation of human desire. In 2015, a team at Google led by Alexander Mordvintsev released the first Deep Dream machine hallucinations to an amazed internet. These were instantly recognized as psychotropic fragments of a universal visual language. We'd seen these pictures before: in our dreams, our trips, and perhaps even our many UFO encounters. Enthusiastic communities formed around the images: what they were, and how you too could make them. Enthusiasts like me found themselves immersed in a world of new ideas, ranging from mundane (though nontrivial) computer setup issues to watching YouTube videos on linear algebra.

Audun M. Øygard / GoogLeNet Class 777: Saxophone

It turns out that showing the learning machine thousands of saxophones is enough to get it into the habit of finding them. Peeking inside, it's apparent that details in the environment contribute to the classification of a subject. The learning machine attaches vague hands and even a musician wearing a purple-y suit to its representation of Saxophone. Why purple? Stage lighting? Or just because... musicians? Most of the examples it learned from must have also included this subject matter.

Audun M. Øygard / GoogLeNet Class 161: Basset Hound

So here’s one surprise: neural networks that were trained to discriminate between different kinds of images have quite a bit of the information needed to generate images too 
— Mordvintsev, Alexander; Olah, Christopher; Tyka, Mike (June 17, 2015). "Inceptionism: Going Deeper into Neural Networks"
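
That observation is easy to try for yourself. The sketch below assumes a torchvision GoogLeNet in place of the original Caffe model, and simply nudges a noise image by gradient ascent until the classifier's saxophone score climbs. Class visualizations like Øygard's add regularizers (jitter, blurring, multiple scales) that this bare-bones version omits.

```python
import torch
from torchvision import models

# Bare-bones class visualization by gradient ascent. Class index 776 is
# the usual 0-based ImageNet id for "sax, saxophone"; treat the exact
# index, learning rate, and step count as assumptions.
model = models.googlenet(weights="DEFAULT").eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    scores = model(image)
    loss = -scores[0, 776]        # push the saxophone score upward
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)       # keep pixels in a displayable range
```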

Finding Neural Art

My early motion graphics experiments learned to pull archaic forms out of video. There was a feeling that if you could only get closer you would see they were real places. Several hundred hours later, though, something was missing. Deep Dream provides easy novelty but also a sameness that discourages empathy. Without a personal relationship to the hallucinations, it's just another video effect. The algorithm is deterministic, so given the same inputs it will paint the same dream every time.
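
For the curious, a minimal version of that dream step looks something like the following, again assuming a torchvision GoogLeNet as a stand-in: amplify whatever an intermediate layer already sees and feed the result back in. The layer choice and step size are assumptions; the point is that fixed weights plus a fixed input leave no room for randomness.

```python
import torch
from torchvision import models

# One minimal Deep Dream step: make an intermediate layer's activations
# grow stronger ("whatever you see, see more of it").
model = models.googlenet(weights="DEFAULT").eval()

captured = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: captured.update(act=output)
)

def dream_step(image, step_size=0.01):
    image = image.clone().detach().requires_grad_(True)
    model(image)
    captured["act"].norm().backward()          # amplify current activations
    grad = image.grad
    image = image + step_size * grad / (grad.abs().mean() + 1e-8)
    return image.detach().clamp(0, 1)

frame = torch.rand(1, 3, 224, 224)             # stand-in for one video frame
for _ in range(20):
    frame = dream_step(frame)                  # same input, same dream, every time
```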

I could not change the machine, so instead I showed it something unpredictable: the World. My video installations show this unpredictable world to a gregarious learning machine through a live camera. My software runs the Deep Dream algorithm continuously on the camera feed. Any change to that view encourages the neural network to form new interpretations, which are continually redrawn.

The display updates at 30 frames per second, and new hallucinations resolve every second or so. To keep the experience fluid, the learning machine sees only motion; when nothing moves, it dreams about the last thing it dreamed about. Human participants spontaneously invent shadow-puppet hide-and-seek games. Some go further. It's a simple game loop: hurry up and wait, hurry up and wait.
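
The update rule can be sketched in a few lines. This is a simplified illustration rather than the installation's production software: grab_frame and dream_step are hypothetical stand-ins for the camera read and the Deep Dream pass above, and the motion threshold is an arbitrary assumption.

```python
import numpy as np

def grab_frame(t, size=(224, 224)):
    """Stand-in for a live camera read; returns a float image in [0, 1]."""
    rng = np.random.default_rng(t)
    return rng.random((*size, 3))

def dream_step(image):
    """Stand-in for one Deep Dream pass over the dream buffer."""
    return np.clip(image * 1.01, 0.0, 1.0)

MOTION_THRESHOLD = 0.05   # assumed: how much a pixel must change to count as motion

previous = grab_frame(0)
dream = previous.copy()

for t in range(1, 300):                       # the real loop runs forever at ~30 fps
    frame = grab_frame(t)
    motion = np.abs(frame - previous).mean(axis=-1, keepdims=True)
    mask = (motion > MOTION_THRESHOLD).astype(np.float32)

    # Only moving regions feed new pixels into the dream; still regions
    # keep dreaming about the last thing they dreamed about.
    dream = mask * frame + (1.0 - mask) * dream
    dream = dream_step(dream)
    previous = frame
```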

Learning machines teach us something amazing: it doesn't take much to invent the world. Any technology capable of recognizing images must also invent them. Wherever we find aliens, they'll be artists too.