Introduction to Neural Networks

To begin with, a neuron is the basic unit of the brain; it transmits information from one nerve cell to another.

Main parts:

Synapse: a structure that permits a neuron (or nerve cell) to pass an electrical or chemical signal to another neuron
Axon: a slender projection of a nerve cell, or neuron, that typically conducts electrical impulses
Dendrites: extensions of a nerve cell that propagate the electrochemical stimulation received from other nerve cells to the cell body

How do neurons do this?
Neurons are excitable and produce events called action potentials, also known as nerve impulses or spikes. These nerve impulses are the basic currency of the brain, for several reasons:

for neurons to communicate with each other
for computation to be performed
for information to be transmitted
A neuron begins by giving out an electrical signal, which starts a chain reaction. Every neuron in the path takes up the signal and passes it on to the next one: the dendrites pick up the impulse and send the message to the axon, which then delivers it to the next neuron.

Once the message reaches its target (e.g., muscles), a neurotransmitter is released and triggers the action for which the message was sent.

The most intriguing part of this process is that all of it happens in about 7 milliseconds.

The Artificial Neuron
In 1957, Dr. Frank Rosenblatt, an American psychologist, developed the Perceptron, an electronic brain that worked on the biological principles of the actual brain and showed the ability to learn. It was first simulated on an IBM 704 computer at the Cornell Aeronautical Laboratory.

If a triangle is held up, the perceptron’s eye picks up the image and conveys it along a random succession of lines to the response units, where the image is registered. It can tell the difference between a cat and a dog, although it cannot tell whether the dog is to the left or right of the cat.

A single-layer perceptron was found to be useful in classifying a continuous-valued set of inputs into one of two classes. The perceptron computes a weighted sum of the inputs, subtracts a threshold, and passes one of two possible values out as the result. Perceptrons, however, are unable to solve problems that are not linearly separable, a limitation identified in 1969 by Minsky and Papert.
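As a minimal sketch of that computation (the weights and threshold below are hand-picked for illustration, not taken from any source), a two-class perceptron decision can be written as:

```python
import numpy as np

def perceptron(x, w, threshold):
    # Weighted sum of the inputs, minus the threshold, mapped to one of two values.
    return 1 if np.dot(w, x) - threshold > 0 else 0

# Illustrative weights, hand-picked to realise logical AND, which is
# linearly separable; no weights exist that make this single unit compute XOR.
w = np.array([1.0, 1.0])
threshold = 1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x), w, threshold))
```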

In 1959, Bernard Widrow and Marcian Hoff of Stanford developed models they called ADALINE and MADALINE, named for their use of Multiple ADAptive LINear Elements. MADALINE was the first neural network to be applied to a real-world problem: an adaptive filter that eliminates echoes on phone lines, a system still in commercial use. The learning procedure is based on the error signal generated by comparing the network’s response with the optimal (correct) response. If the error (computed as the difference between the summer and the reference switch) is nonzero, then all gains are modified in the direction that reduces the error magnitude.
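The gain-adjustment procedure described here is the Widrow–Hoff least-mean-squares (delta) rule. A rough sketch, with a learning rate and training data of my own choosing:

```python
import numpy as np

def adaline_train(X, targets, lr=0.05, epochs=50):
    # Delta rule: nudge every gain in the direction that shrinks the error
    # between the linear summer's output and the correct (reference) response.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, targets):
            y = np.dot(w, x) + b  # output of the summer (before any thresholding)
            error = t - y         # compare with the reference response
            w += lr * error * x
            b += lr * error
    return w, b

# Hypothetical data: two patterns that should map to +1 and -1.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
print(adaline_train(X, np.array([1.0, -1.0])))
```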

Multilayer Perceptrons

The linear-separability limitation identified above can be overcome by introducing multiple layers of perceptrons.

In the accompanying figure, the numbers within the neurons represent each neuron’s explicit threshold (which can be factored out so that all neurons share the same threshold, usually 1), and the numbers annotating the arrows represent the weights of the inputs. This net assumes that if the threshold is not reached, zero (not -1) is output. Note that the bottom layer of inputs is not always counted as a real layer of the network.
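To make this concrete, here is a sketch of a two-layer threshold network computing XOR, the classic function that no single-layer perceptron can represent. The particular weights and thresholds are a common textbook choice, not necessarily those of the original figure:

```python
def step(total, threshold):
    # Fire (output 1) only when the weighted input reaches the threshold;
    # otherwise output zero (not -1), as the net described above assumes.
    return 1 if total >= threshold else 0

def xor_net(x1, x2):
    # Hidden layer: one unit computes OR (threshold 1), one computes AND (threshold 2).
    h_or = step(1 * x1 + 1 * x2, 1)
    h_and = step(1 * x1 + 1 * x2, 2)
    # Output unit fires when OR is on but AND is not: exactly XOR.
    return step(1 * h_or - 2 * h_and, 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```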

