
2.4 Connectionism

1. Not What It Seems.

Connectionism is the theory that cognitive processes are processes taking place in networks of nerve cells. Unlike functionalists, connectionists believe that the structure of the brain is critically important to how the mind works. You would therefore probably assume that they would, for the most part, use the brain and real nerve cells as their models. If so, you would be wrong. Until recently, those practicing connectionism have built "neural nets" that bear little or no resemblance to how the brain really works. This is not because they did not know enough about how real neurons and neural networks operate, but because, by and large, the people who converted to this camp of philosophy did so from functionalism, and they have been unable or unwilling to shed the baggage of their previous beliefs. What connectionism terms "neural networks" is much closer to mathematical graph and network theory in computer science than to how the brain and neurons really work.

2. Backpropagation Network.

Figure 1. The typical layout for a backpropagation neural network.

To help you understand the differences I am talking about, I will explain a little about how one of the most popular forms of "neural net" works: the feedforward backpropagation network. These networks typically have three layers: an input layer, a hidden layer, and an output layer. They are arranged like a matrix, and each "neuron" in one layer has connections to all "neurons" in the next layer. Each connection is given a weight, typically between negative one and one. The values supplied to the input neurons are usually also between negative one and one. The "neurons" then use a neuron-like formula to determine whether they will fire: they sum the values of all their inputs, and if the sum is above a specific threshold value, they fire. When a neuron fires, it sends each neuron in the next layer an input value equal to the weight of the connection between them. For example, in figure 1, if neuron N1 fires and weight W14 is 0.5, then neuron N4 will receive an input of 0.5 from neuron N1. (This describes a simple step function for the threshold. There are also more complicated activation functions, like the sigmoid and the ramp.) The neurons in the hidden layer then sum their inputs, use the threshold function to decide whether they fire, and feed into the output layer. Finally, the output layer does the same thing, and all of its outputs taken together specify a binary value.

However, before a backpropagation network can be used, it must first be trained. To do this, you must first decide what data is relevant to include in the inputs of the network and what data you want for the output. This is very important: if you provide a lot of unnecessary inputs, you will seriously degrade the performance of your network, but if you do not provide enough data, or the right kind of data, then the network will probably not work at all.
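The forward pass described above can be sketched in a few lines of Python. Everything here is made up for illustration: a tiny 2-2-1 network with hand-picked weights, using a sigmoid in place of the simple step threshold.

```python
import math

def sigmoid(x):
    # Smooth threshold function mentioned in the text; a simple step
    # function would instead return 1.0 if x exceeded the threshold, else 0.0.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_ih, w_ho):
    # Each hidden "neuron" sums its weighted inputs and applies the
    # threshold function; the output layer then does the same thing
    # with the hidden activations.
    hidden = [sigmoid(sum(i * w for i, w in zip(inputs, col)))
              for col in w_ih]
    return [sigmoid(sum(h * w for h, w in zip(hidden, col)))
            for col in w_ho]

# Hypothetical weights: w_ih[j] holds the weights into hidden neuron j,
# w_ho[k] the weights into output neuron k.
w_ih = [[0.5, -0.4], [0.3, 0.8]]
w_ho = [[0.7, -0.2]]
print(forward([1.0, -1.0], w_ih, w_ho))
```

With a sigmoid the output is a graded value in (0, 1) rather than a hard binary firing decision; reading the final outputs as above or below 0.5 recovers the binary interpretation.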
After you have decided what data to use, you must come up with a way to scale the data so that it fits between negative one and one while still retaining its meaning relative to the other data. Believe it or not, this usually takes about 90% of the time involved in building one of these neural networks. Now you have a set of usable input and output data for training. You take half of this data for training; the other half will be used to test the resulting system.

For each input variable you have one input neuron, and for each output variable you have one output neuron. You apply each input vector to the input neurons and allow it to percolate through the network until you get an output vector. You then compare that output vector to the actual output vector from your training data and generate an error vector. A learning rule is then applied to modify the connection weights between the hidden layer and the output layer so that the output will be closer to the desired value. Two popular rules are the Hebb rule and the Delta rule. The Hebb rule states that the change in a connection weight is the product of a constant called the learning rate and the activations of the two neurons in question. Once all the connection weights between the hidden layer and the output layer have been modified, the connection weights between the input layer and the hidden layer are modified in a similar manner. The whole process is repeated over and over until the outputs for a majority of your inputs fall within a certain error range. At this point training is complete, and you need to test the system on the other half of the data. You again apply the input data and compare the outputs, but this time you do not use the error information to modify the connection weights.
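The training loop above can be sketched in Python. This toy example is hypothetical throughout: a single-layer net learning logical AND with the Delta rule, a made-up learning rate, and no separate test half, since there are only four patterns. It illustrates the weight-update idea, not a full two-layer backpropagation implementation.

```python
import math
import random

def sigmoid(x):
    # Smooth threshold function; squashes any sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: learn logical AND. (Real data would first be
# scaled into the network's working range, as described above;
# these inputs already are.)
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 0.0),
        ([1.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]

random.seed(1)
w = [random.uniform(-1, 1) for _ in range(2)]  # connection weights
b = random.uniform(-1, 1)                      # bias (the threshold)
rate = 0.5                                     # learning rate

for _ in range(5000):
    for inputs, target in data:
        out = sigmoid(sum(i * wi for i, wi in zip(inputs, w)) + b)
        # Delta rule: the weight change is the learning rate times the
        # error, scaled by the slope of the activation and the input.
        delta = rate * (target - out) * out * (1.0 - out)
        w = [wi + delta * i for wi, i in zip(w, inputs)]
        b += delta

# For contrast, the Hebb rule: the weight *change* is the learning
# rate times the product of the two connected neurons' activations,
# with no error term involved at all.
def hebb_update(weight, rate, pre, post):
    return weight + rate * pre * post
```

After training, each pattern's output rounds to its target; testing, as the text says, means running held-out data through the same forward pass without applying any weight updates.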
If the neural network produces outputs that closely match the actual data, then you can feel reasonably confident about using it on new data. If it is way off, then you have to go back, rethink the whole proposition, and figure out what you left out or did wrong.

3. Differences From Real Neural Networks.

1. Real brains are always in motion. One of the major features of this type of connectionist neural net is that it settles into a steady state from which you can read the output values. There is a biological term for any real brain that settles into a steady state like this. It is called death.
2. In the real world you do not have a programmer deciding which data is important and munging it for you so it is in an acceptable range. The environment is in no way pre-labeled or presorted. The neural network must take the ambiguous inputs from the world and produce real and continuing behavior that leads to the survival of the organism.
3. There is no such thing as an error value in real brains. Real neural systems do not have the "right" outputs imposed on them from outside. They must use internal value systems, generated from millions of years of evolution, to help determine which outputs are more likely to lead to survival. For instance, a value system in the insect simulator is the hunger controller. When the insect gets hungry, the controller changes the insect's behavior so that it tries to find and eat food. This behavior is not imposed by some external system or programmer, but is inherent in the network itself.
4. Most of the rules used to change the connection strengths have nothing to do with any known biology of real neurons. The exception here is Hebb's rule, which is based on biological evidence. But even there it is not entirely complete or accurate.
5. Real neural networks are not laid out in nice, neat, one-way grid networks in which each element connects to every element of the next layer. Real brains are very messy and very complex. They have numerous loops and feedback connections. There are millions and millions of neurons that connect in almost uncountable patterns. In fact, it is these feedback loops and connections that allow the brain to perform many of its functions. Without them, there is no way to perform a number of operations.

4. Conclusion.

As I said above, the majority of the people doing research in this field are making many of the same old mistakes they made when they followed functionalism. They finally acknowledge that the brain is important to how the mind works, and then turn right around and ignore the real details of how the brain works and is interconnected. They use an instructional system explicitly designed to train the networks, when it is obvious that the real brain does not have the benefit of programmers to make its input and output data pretty. However, not everyone is falling into these traps. A growing number of people are trying to use information gleaned from neurobiologists to build systems based more closely on what really happens in the brain. But because the field of connectionism is so thoroughly dominated by people using these unrealistic "neural nets," those trying to build systems based on actual knowledge of the brain rarely identify themselves as connectionists.


MindCreators.Com is edited and maintained by David Cofer. If you have any questions, comments, or just want to discuss the contents of this website, then email me at:

Copyright © 2002 by David Cofer. All rights reserved.