A little robot uprising goes a long way.

Ville-Matias Heikkilä / A neural network tries to identify objects in ST:TNG

It should come as no surprise when a learning machine misidentifies the starship Enterprise, having never learned about Star Trek in the first place. But the Enterprise-D is indeed something like a combination of CD Player, Odometer, and Digital Clock. The computer is merely wrong, but together we arrived at poetry.

Accompanied only by the short caption “this image was created by a computer on its own”, the picture has been going viral on Twitter and Tumblr since it was leaked roughly a week ago (June 11, 2015).

In 2015 a team at Google led by Alexander Mordvintsev released the first Deep Dream machine hallucinations to an amazed internet. These were instantly recognized as psychotropic fragments of a universal language. Enthusiastic communities formed around the images: what they were, and how you too could make them. This was my introduction to a gentler story about artificial intelligence.

Audun M. Øygard / GoogLeNet Class 777: Saxophone

Showing the learning machine thousands of saxophones is enough to get it into the habit of finding them. Peek inside and it's apparent that details in the environment contribute to the classification of a subject. The learning machine attaches vague hands and even a musician wearing a purple-y suit to its understanding of Saxophone. Why purple? Stage lighting? Or just because... musicians? Most of the examples it learned from must have also included this subject matter.

Audun M. Øygard / GoogLeNet Class 114: Slug

So here’s one surprise: neural networks that were trained to discriminate between different kinds of images have quite a bit of the information needed to generate images too.
— Mordvintsev, Alexander; Olah, Christopher; Tyka, Mike (June 17, 2015). "Inceptionism: Going Deeper into Neural Networks"
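
Here is a minimal sketch of that surprise, assuming PyTorch and torchvision's pretrained GoogLeNet rather than the team's original code: climb the gradient of a single class score and the discriminative network begins to paint what it expects to see. The class index echoes the saxophone caption above and is only illustrative.

```python
import torch
from torchvision import models

# Pretrained GoogLeNet classifier; eval mode returns plain class logits.
model = models.googlenet(weights="IMAGENET1K_V1").eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
target_class = 777  # saxophone, per the caption above; index is illustrative

for _ in range(200):
    score = model(image)[0, target_class]  # how strongly the net "sees" a saxophone
    score.backward()                       # gradient of that score w.r.t. the pixels
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)  # gradient ascent
        image.clamp_(0, 1)                 # keep the picture displayable
        image.grad.zero_()
# The real Deep Dream recipe adds input normalization, multi-scale "octaves",
# and smoothing, but the point stands: the classifier already knows how to draw.
```

Everything the network paints comes from its training examples, which is why the vague hands and the purple-y suit show up uninvited.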

My early motion graphics experiments learned to pull archaic forms out of video. There was a feeling that if you could only get closer you would see they were real places. Several hundred hours later, though, something was missing. Deep Dream provides easy novelty but also a sameness that discourages empathy. Without a personal relationship to the hallucinations it's just another video effect. The algorithm is predictable, so given the same inputs it will paint the same dream every time.

I could not change the machine, so instead I showed it something unpredictable: the World. My video installations show that unpredictable world to a gregarious learning machine through a live camera. My software runs the Deep Dream algorithm continuously on the camera feed. Any changes to that view encourage the neural network to form new interpretations, which are continually being redrawn.

The display updates at 30 frames a second and new hallucinations resolve every second or so. To make the experience fluid, the learning machine sees only motion; until something moves, it dreams about the last thing it dreamed about. Human participants spontaneously invent shadow-puppet games of hide and seek. Some go further. It's a simple game loop: hurry up and wait, hurry up and wait.
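
A rough sketch of that loop, under stated assumptions: OpenCV for the camera, a frame-difference mask standing in for "sees only motion," and a hypothetical dream_step() in place of the actual Deep Dream iteration. It is not the installation's real code.

```python
import cv2
import numpy as np

def dream_step(canvas):
    """Hypothetical placeholder for one Deep Dream iteration over the canvas."""
    return canvas

cap = cv2.VideoCapture(0)
ok, previous = cap.read()
canvas = previous.astype(np.float32)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    # The machine "sees only motion": where pixels changed, the camera overwrites
    # the dream; everywhere else it keeps dreaming about the last thing it dreamed.
    diff = cv2.absdiff(frame, previous)
    motion = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY) > 25   # crude motion mask
    canvas[motion] = frame[motion]
    canvas = dream_step(canvas)           # hallucinations resolve over many frames
    previous = frame
    cv2.imshow("dream", canvas.astype(np.uint8))
    if cv2.waitKey(33) == 27:             # ~30 fps display; Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Blending camera pixels into the canvas only where something moves is what lets a still room watch the last dream keep refining, while any gesture immediately reseeds it.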

Learning Machines teach us something amazing—it doesn't take much to invent the world. Any technology capable of recognizing images must also invent them. My science fiction cave paintings are sometimes a mirror and sometimes a window. I never imagined I'd use stagecraft and algebra to make pictures of Minds. Now is the best time to understand and embrace the artificial intelligence technology which silently reshapes the world. Wherever we find aliens they'll be artists too.