Neural networks are computer models that learn by example, but what is learning?
Neurons are living control panels for biological nervous systems. There are billions of these electrically excitable cells in a human brain, and any one neuron may have thousands of inputs. Electrochemical signals flow into the ancient oceans inside each of them. The connective fibers can be a meter or more in length, which makes neurons some of the longest cells in the body.
Artificial neurons, the kind that work on my computer, are modeled after biological ones. Up to a point, anyway. These cartoon neurons modulate each of their inputs (multiplication) and mix them together (addition) to produce an output. Each input has a weight, initially random, which biases the incoming signal. A positive weight encourages the signal. A negative weight inhibits it. The neuron is activated when it produces an output. It fires.
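That multiply-then-mix behavior fits in a few lines of Python. This is a generic cartoon neuron with a simple threshold, not the exact unit GoogLeNet uses:

```python
def neuron(inputs, weights, threshold=0.0):
    """A cartoon artificial neuron: modulate each input by its weight
    (multiplication), mix them together (addition), and fire if the
    total clears a threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 if total > threshold else 0.0  # 1.0 means the neuron fires

# A positive weight encourages a signal; a negative weight inhibits it.
print(neuron([1.0, 1.0], [0.6, -0.2]))  # 0.6 - 0.2 = 0.4 > 0, so it fires: 1.0
```

With the weights flipped toward inhibition, say `[0.1, -0.5]`, the same inputs sum below the threshold and the neuron stays quiet.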
In traditional computing, decisions are yes/no, true/false affairs. Neural computation is a bit different. Decisions emerge from a little more of this, a little less of the other thing. Choices are more like habits. Learning, then, is the process of adjusting all the weights so a neural network gets into the habit of recognizing new input. Machine learning is the art of getting it to do this on its own.
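A minimal sketch of that habit-forming: a perceptron-style update rule, which nudges each weight a little toward the answer we want. It is far simpler than how a real network like GoogLeNet is trained, but the spirit is the same:

```python
def train_step(weights, inputs, target, rate=0.1):
    """Nudge each weight slightly toward producing the target output."""
    output = 1.0 if sum(x * w for x, w in zip(inputs, weights)) > 0 else 0.0
    error = target - output  # -1.0, 0.0, or +1.0
    return [w + rate * error * x for w, x in zip(weights, inputs)]

# Start with unhelpful weights; repeated small nudges build the
# "habit" of answering 1.0 for this input.
weights = [-0.4, 0.2]
for _ in range(20):
    weights = train_step(weights, [1.0, 1.0], target=1.0)
```

Once the habit forms, the error drops to zero and the weights stop moving. There is no moment when a rule gets written down; the behavior just settles in.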
Neural networks come in different shapes and sizes. Deep Dream Vision Quest uses a popular implementation of Google's Inception architecture (like the movie) called GoogLeNet. It may come as a surprise, but the network doesn't contain images. It contains only habits, which are the learned weight values of neural connections. GoogLeNet was trained on over 1 TB of images, but is itself only 50 MB in size.
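That ratio is less mysterious than it sounds: a trained network stores just one number per connection, nothing per image. A back-of-the-envelope sketch, with a layer size made up purely for illustration:

```python
# A network's size on disk is roughly: number of weights x bytes per weight.
# One hypothetical fully connected layer, 1024 inputs to 1024 outputs:
num_weights = 1024 * 1024
bytes_per_weight = 4  # 32-bit floating point
size_mb = num_weights * bytes_per_weight / 1e6
print(f"{size_mb:.0f} MB")  # about 4 MB for a million connections
```

Scale that up across all of GoogLeNet's layers and you land in the tens of megabytes, no matter how many terabytes of images flowed through during training.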