2.3 Functionalism

1. Introduction

My favorite character on Star Trek: The Next Generation was Data, the android who was always striving to understand emotions and become human. Data, like virtually all science fiction ever produced about robots, androids, and Artificial Intelligence, is built on the theory of functionalism. This theory has dominated AI research and thought for the last fifty years, and still does to this day. However, the first cracks have begun to appear as a new generation of researchers arises who do not believe the hype of this philosophy, and who are unhappy with its failures and broken promises.

2. Evolution of Functionalism

In the latter half of the twentieth century, philosophers and psychologists began trying to overcome some of the obvious limitations of behaviorism. By denying the existence of the mind, and of any real internal processes, behaviorists were blinding themselves to reality. At first, the new breed of philosophers unhappy with the status quo came up with the Type Identity theory, which states that every mental state is identical to a neurological state. This theory quickly ran into problems, though. Every person's brain, while structurally similar, is wired completely differently. How then can two people ever be in exactly the same neurological state?

The Type Identity theory was soon replaced by the Token-Token Identity theory, which attempted to answer that question: if two people are in different neurophysiological states, what is it about those different states that makes them the same mental state? Its proponents felt they had the answer. They said it was the function of those two states that made them the same. That sounds reasonable at first glance, and it led directly to functionalism, which says that two different brain-state tokens are tokens of the same type of mental state if and only if the two brain states have the same causal relations to the input stimuli the organism receives, to its various other "mental" states, and to its output behavior. In simple terms, two thoughts are the same if all of the inputs, internal and external, lead to the same output. What functionalism gives you that behaviorism lacked is those other "mental" states; by acknowledging internal inputs, it was trying to put the mind back in the equation. There are a number of problems with functionalism, but one in particular led to the next breakthrough.

Functionalism so defined failed to state in material terms what it is about the different physical states that gives different material phenomena the same causal relations. How does it come about that these quite different physical structures are causally equivalent?
(Searle, The Rediscovery of the Mind)2.3.1

3. Artificial Intelligence

It was at this point that researchers in the newly burgeoning field of Artificial Intelligence stepped in to provide an answer to that question. They said that different material structures can be mentally equivalent if they are different hardware implementations of the same computer program. In other words, the mind is the software and the brain is the hardware. Since you can run Microsoft Word on countless different computers, even ones with different processors, and have it behave exactly the same, the mind program could likewise run on any type of hardware. On this view the brain is in no way important to the process of thought, and all thought is merely the manipulation of symbols. If researchers could somehow figure out how to download the "mind program" from me, they could upload my program onto your brain, and my mind would then be running on your brain. I would be living in your body. To them, the structure of the brain does not matter.

The idea that any computer can run the same software rests on what is called a Turing machine. Alan Turing was a brilliant British mathematician who invented an abstract machine, consisting of a finite set of control states plus a tape of symbols, and showed that any program that can run on one Turing machine can run on another, if not directly, then by simulating the other machine. So theoretically you could build a Turing machine out of old soup cans that could run your Word program. It would probably take a thousand years just to get past the load-up procedure, but it would work. At any moment such a machine is in exactly one of its states, and it transitions to the next state using explicitly defined rules. So now the functionalists had the answer they wanted. They did not care how the hardware of the brain worked, because all they needed to understand was how the software of the mind related inputs and internal states to produce output behavior.
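To make this concrete, here is a minimal Turing machine interpreter sketched in Python (my own toy construction; the table format and the binary-increment example are invented for illustration). Notice that the loop which runs the machine never changes; only the transition table does. That separation of an unchanging "hardware" loop from a swappable "software" table is exactly the hardware-independence the functionalists were leaning on.

```python
# A minimal Turing machine sketch. The machine is just a transition table:
# (state, symbol) -> (new_state, symbol_to_write, head_movement).
# Any table plugged into run_turing_machine runs on the same loop.

def run_turing_machine(table, tape, state="start"):
    tape = dict(enumerate(tape))   # sparse tape; blank cells default to "_"
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")
        state, write, move = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, "_") for i in cells).strip("_")

# Transition table for binary increment: scan right, then carry leftward.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt",  "1", "L"),
    ("carry", "_"): ("halt",  "1", "L"),
}

print(run_turing_machine(INCREMENT, "1011"))  # -> 1100
```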

4. The Brain Is Not A Computer!

One of the first places functionalism breaks down is in the direct comparison of the brain to a computer or a finite state machine. As mentioned before, finite state machines have a finite number of unambiguous states and use deterministic rules to transition from one state to the next. Computer systems are heavily engineered, with backup systems and error checking, to eliminate all ambiguity. This ensures that they can go from one designated state to the next flawlessly, every time, using those deterministic rules. Computers are binary systems that use two arbitrary voltage levels to store state information; the small deviations that do occur are ignored by agreement and design. The brain, however, is a non-deterministic analog system with, in effect, an infinite number of possible states.

I recently read about a good example of the difference I am trying to explain. A graduate student doing research into genetic algorithms tried to evolve a computer chip that could reliably tell the difference between two sound frequencies. Instead of explicitly designing the circuit himself, he used evolution to put the pieces of the machine together, to see if evolution could do it better. After hundreds of generations he did get a system that worked one hundred percent of the time, running entirely on a digital computer chip. However, when he analyzed what evolution had built, he found pieces of the system that were not even connected to the rest, yet when those pieces were removed the system no longer worked. Finally he realized that evolution did not care about digital theory; it had given him an analog solution. Even though the chip was designed to run as digital hardware, evolution had exploited the fact that neighboring transistors affect each other in highly complex and nonlinear ways, solving the problem with only twenty percent of the components a human designer would have used. These effects are subtle, and engineers use a lot of tricks to eliminate them from the computer systems we use every day. Since no two chips have exactly the same physical relationships between their parts, that solution would not work on any chip but that one! The evolved chip, in other words, was analog, not digital, so the same "program" could not be moved to another chip and produce the same results. In just this way, each of our brains is analog and wired completely differently. There is no way they could work as finite state machines of any kind.
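For readers unfamiliar with genetic algorithms, here is a minimal sketch of the recipe in Python. This is emphatically not the student's FPGA experiment; the bit-matching fitness function, population size, and mutation rate are invented for illustration. In his case, fitness was measured on the physical chip itself, which is exactly how the analog quirks of that one piece of silicon crept into the evolved design.

```python
import random

# Toy target: evolve a bitstring that matches a fixed pattern.
# The loop below is the whole recipe: random variation plus selection
# on measured fitness, with no human deciding how the pieces fit together.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    # Count positions that match the target pattern.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Splice two parents together at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(40)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                      # a perfect individual has evolved
    parents = population[:10]      # keep the fittest as breeding stock
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(30)
    ]

print("generation", generation, "best", population[0])
```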

5. Syntax Vs. Semantics

In functionalist systems, symbols are instantiated in a program as states of physical objects. Strings of symbols are used to represent sensory inputs, behaviors, memories, categories, and all the other information the system deals with. Thought is then just the manipulation of these input and internal symbol strings to produce output symbol strings representing things like words, sentences, and actions. These symbol manipulations are purely formal in nature and are carried out without any reference to the meanings of the symbols involved; a set of such rules for symbol manipulation constitutes a syntax. When I write a program on my computer at home, I generate the rules in an algorithm, and at the same time I assign the meaning to all of the symbols. But who assigns meaning to the symbols in the functionalist systems, and how? There are infinitely many possible inputs from the environment, and the world does not come pre-labeled with categories. When I see a cobra rearing up in the grass I want to flee for my life, but when a mongoose sees that same cobra it sees lunch. The exact same sensory inputs lead to two very different behaviors depending on the context of who is perceiving those sensations. The world does not have predefined categories, and how you categorize objects in the world depends critically on your point of view. Or as Gerald Edelman puts it:

We cannot individuate concepts and beliefs without reference to the environment. The brain and the nervous system cannot be considered in isolation from states of the world and social interactions. But such states, both environmental and social, are indeterminate and open-ended. They cannot be simply identified by any software description. Functionalism, construed in this context as the idea that propositional attitudes are equivalent to computational states of the brain, is not tenable. ... computer programs are defined strictly by their formal syntactical structure, that syntax is insufficient for semantics, and that in contrast, humans are characterized by having semantic contents. Semantic contents involve meanings, and a syntax does not involve itself to deal with meanings.
(Edelman, Bright Air, Brilliant Fire)2.3.2
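To make the point concrete, here is a toy sketch in Python, entirely my own construction with invented token names. The same percept string is pushed through two different rule tables, one standing in for me and one for the mongoose. Neither table "knows" what a cobra is; every meaning here was assigned by the programmer who wrote the tables.

```python
# Purely syntactic rule application: match a token string, copy out the
# prescribed answer. The tokens are uninterpreted as far as the program
# is concerned; their meanings exist only in the programmer's head.

HUMAN_RULES    = {("COBRA", "REARING"): "FLEE"}
MONGOOSE_RULES = {("COBRA", "REARING"): "ATTACK_AND_EAT"}

def respond(rules, percept):
    return rules.get(percept, "IGNORE")

percept = ("COBRA", "REARING")
print(respond(HUMAN_RULES, percept))     # -> FLEE
print(respond(MONGOOSE_RULES, percept))  # -> ATTACK_AND_EAT
```

Rename the tokens to G0017 and Q42 and the program behaves identically, which is the whole problem: the semantics live in my head, not in the system.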

6. The Chinese Room

To help you understand what is meant when it is said that the manipulation of symbols alone can never produce thought, I would like to discuss a devastating attack on functionalism devised by John Searle, called the Chinese Room thought experiment.

Well, imagine that you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you (like me) do not understand a word of Chinese, but that you are given a rule book in English for manipulating these Chinese symbols. The rules specify the manipulations of the symbols purely formally, in terms of their syntax, not their semantics. So the rule might say: "Take a squiggle-squiggle sign out of basket number one and put it next to a squoggle-squoggle sign from basket number two." Now suppose that some other Chinese symbols are passed into the room, and that you are given further rules for passing back Chinese symbols out of the room. Suppose that unknown to you the symbols passed into the room are called "questions" by the people outside the room, and the symbols you pass back out of the room are called "answers to the questions." Suppose, furthermore, that the programmers are so good at designing the programs and that you are so good at manipulating the symbols, that very soon your answers are indistinguishable from those of a native Chinese speaker. There you are locked in your room shuffling your Chinese symbols and passing out Chinese symbols in response to incoming Chinese symbols. On the basis of the situation as I have described it, there is no way you could learn Chinese simply by manipulating these formal symbols.
Now the point of the story is simply this: by virtue of implementing a formal computer program from the point of view of an outside observer, you behave exactly as if you understand Chinese, but all the same you don't understand a word of Chinese. But if going through the appropriate computer program for understanding Chinese is not enough to give you an understanding of Chinese, then it is not enough to give any other digital computer an understanding of Chinese.
(Searle, Minds, Brains And Science)2.3.3
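To make the emptiness of the rule-following concrete, here is a toy "room" sketched in Python (my own sketch, not Searle's formalism; the rule-book entries are invented). The function that plays the person in the room does nothing but match incoming squiggles against the book and copy out the prescribed reply; nothing about Chinese is stored anywhere in it, and running it any number of times adds nothing.

```python
# The "rule book": a lookup table from incoming symbol strings to replies,
# written entirely by its authors (the programmers outside the room).

RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I am fine"
    "你会说中文吗": "会一点",    # "Do you speak Chinese?" -> "A little"
}

def person_in_room(symbols):
    # Pure shape-matching; the person never consults any meanings.
    return RULE_BOOK.get(symbols, "请再说一遍")  # "Please say that again"

print(person_in_room("你好吗"))  # -> 我很好
```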

There are counterarguments to this experiment. One is that it is the whole system taken together that produces emergent behavior that understands Chinese. I am a very strong adherent of emergent phenomena; most of my research is based on that principle. However, this system lacks at least one of the key ingredients needed to produce emergent behavior: systems exhibiting emergence are made up of multiple independent agents that interact in highly non-linear ways, and that is not the type of system functionalism describes, so it cannot produce emergent behavior. Besides, it just seems downright silly to say that your big brain cannot learn Chinese, but the combination of the symbols, the room, and your brain can. The same people who make this argument also claim Searle is saying that the whole system seems to understand Chinese when none of its parts do. That is not the point he is trying to make, because there is in fact a part that does understand Chinese: the programmers who originally wrote the rule books for the person in the room to follow. What Searle is saying is that he, the person inside the room, can never learn Chinese by following those rules, any more than a computer can when following rules. The only meaning in the system is the meaning given to it by the programmers. And this does not help us one bit when trying to understand intelligence in a larger sense, because then you have to ask: who gave meaning to the program running in the programmer's head? And so on, ad infinitum.

It is not easy to see how, in the absence of a programmer, a mechanism could be constructed that would assign meaning to syntactic representations and still preserve the arbitrary quality of those representations, a quality that is an essential part of the functionalist position. But that is our poignant position: we have no programmer, no homunculus in the head.
(Edelman, Bright Air, Brilliant Fire)2.3.4

7. Conclusion

The idea of functionalism is very seductive at first glance: you really don't have to understand how all of that messy gray goo in your head works in order to build intelligent machines that can really think. But if you dig a little deeper, you begin to see some of the inherent problems with this position. The brain is not just an abstract finite-state machine running a program that makes up the mind. The mind is an emergent phenomenon, just as the critics of the Chinese Room experiment claimed, but the emergent behavior depends critically on the structure, connectivity, and function of the brain's neurons and neural subsystems. These are the independent, interacting agents whose highly complex and non-linear outputs give rise to that emergent phenomenon.

