It should come as no surprise when a learning machine trained to recognize mundane objects misidentifies the starship Enterprise. After all, it never learned about science fiction, or even basic astronomy. You see, recognition is a matter of statistical inference and algebra. There's no ghost in the machine directing the AI, just multiplication and addition.
Observations of the visual cortex show similar rhythms of neural activity. An artificial neuron is a cartoon version of a biological neuron, cartoonish only because a living nerve cell must also attend to the business of living. Neuroscience is surely still in its infancy, but no one expects to discover Mind in the brain. Instead, it appears the kind of intelligence that tells stories (including self-awareness) is found in the environment.
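The "multiplication and addition" above can be made literal. A minimal sketch of a single artificial neuron, with invented weights and inputs chosen purely for illustration:

```python
def relu(x):
    # Standard activation: pass positive values through, clamp negatives to zero.
    return max(0.0, x)

def neuron(inputs, weights, bias):
    # Multiply each input by its weight, sum them, add the bias,
    # then squash the total through the activation function.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return relu(total)

# Example values are made up; a trained network just has millions of these.
print(neuron([0.5, 1.0, -0.2], [0.8, -0.4, 0.3], 0.1))
```

Stack enough of these and train the weights on thousands of examples, and the habit of recognition emerges from nothing fancier than arithmetic.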
In this case the computer has hilariously misidentified something obvious, but it's also obvious that the Enterprise-D is indeed something like a combination of CD Player, Odometer, and Digital Clock. In fact, those objects were derived from the same aesthetic that informs the iconic starship design. The computer was merely wrong, but together we arrived at poetry.
The Universal Visual Language
In 2015 a team at Google led by Alexander Mordvintsev released the first Deep Dream machine hallucinations to an amazed internet. These were instantly recognized as psychotropic fragments of a universal visual language. Enthusiastic communities formed around the images: what they were, and how you too could make them. This was my introduction to a gentler story about AI than the standard-issue Frankenstein myth.
Showing the learning machine thousands of saxophones is enough to get it into the habit of finding them. Peeking inside, it's apparent that details in the environment contribute to the classification of a subject. The learning machine attaches vague hands and even a musician wearing a purple-y suit to its understanding of Saxophone. Why purple? Stage lighting? Or just because... musicians? Most of the examples it learned from must have also included this subject matter.
Finding Neural Art
My early motion graphics experiments learned to pull archaic forms out of video. There was a feeling that if you could only get closer, you would see they were real places. Several hundred hours later, though, something was missing. Deep Dream provides easy novelty but also a sameness that discourages empathy. Without a personal relationship to the hallucinations it's just another video effect. The algorithm is predictable, so given the same inputs it will paint the same dream every time.
I could not change the machine, so instead I showed it something unpredictable: the World. My video installations show an unpredictable world to a gregarious learning machine through a live camera. My software runs the Deep Dream algorithm continuously on the camera feed. Any changes to that view encourage the neural network to form new interpretations, which are continually being redrawn.
The display updates at 30 frames per second and new hallucinations resolve every second or so. To make the experience fluid, the learning machine sees only motion. Until something moves, it dreams about the last thing it dreamed about. Human participants spontaneously invent shadow-puppet hide-and-seek games. Some go further. It's a simple game loop: hurry up and wait, hurry up and wait.
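That game loop can be sketched as frame differencing: the dream only restarts from the camera when enough pixels change; otherwise the machine keeps refining whatever it was already dreaming about. This is a hypothetical sketch, not the installation's actual code. Frames are stubbed as flat lists of pixel values, and `deep_dream_step` is a stand-in for one iteration of the real Deep Dream optimization; the threshold value is invented.

```python
MOTION_THRESHOLD = 0.05  # fraction of pixels that must change (invented value)

def frame_difference(a, b):
    # Fraction of pixel values that changed noticeably between two frames.
    changed = sum(1 for pa, pb in zip(a, b) if abs(pa - pb) > 0.1)
    return changed / len(a)

def deep_dream_step(image):
    # Stand-in for one Deep Dream iteration; the real step nudges the image
    # to amplify whatever the network already sees in it.
    return [min(1.0, p * 1.01) for p in image]

def run_installation(frames):
    # Hurry up and wait: dream continuously, but only adopt a new camera
    # frame when enough motion is detected.
    dream = frames[0]
    last_seen = frames[0]
    for frame in frames[1:]:
        if frame_difference(frame, last_seen) > MOTION_THRESHOLD:
            dream = frame        # motion: start dreaming about the new view
            last_seen = frame
        dream = deep_dream_step(dream)  # otherwise keep refining the old dream
    return dream
```

With a real camera, the frames would come from a capture device and `deep_dream_step` would run gradient ascent on a convolutional layer's activations; the gating logic is the part the paragraph above describes.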
Learning machines teach us something amazing: it doesn't take much to invent the world. Any technology capable of recognizing images must also invent them. Wherever we find aliens they'll be artists too.