There are two different views, or theses, put forth by students of artificial intelligence. The weak thesis proposes that artificial intelligence is similar to human intelligence and can help us simulate and better understand the operations of the human mind. This is not controversial. What is more interesting and challenging is the strong thesis, which advances the notion that artificial intelligence is the same as human intelligence. Think about this for a minute. If you were told that your thermostat is the same as human intelligence, even if only in one respect - in that it knows when the temperature falls below or rises above 70 degrees - you would laugh at this notion. But, with artificial intelligence becoming more and more sophisticated - for example, with the computer program Deep Blue finally defeating the world's greatest chess player - you might start thinking again. Notice also that computer technology is advancing so rapidly that no one can forecast what the future might look like in this regard. We are not interested here in predicting the future; we can stipulate as much as we like, and we can even assume that AI in the future will be doing as much as is conceivable. We are studying this subject because it helps us think more clearly and more deeply not only about AI but, fundamentally, about human intelligence itself.
There is one contemporary school of thought in philosophy that supports the notion that artificial intelligence is not different from human intelligence. This school of thought is called functionalism, and it falls under metaphysics - the branch of philosophy that studies the nature of reality, what and how many kinds of real things there are, and so on. Remember that the theories of metaphysics we have studied so far are materialism, idealism, and [Descartes'] dualism. Functionalism is characterized mainly by its distinction between types and tokens. You need to understand this distinction before you can appreciate its significance for metaphysics and for the present debate on AI. The computer you are staring at right now is a token computer. All computers you can ever encounter are token computers. What they all have in common is that they are computers - right? In the idiom of functionalism, all computers are instantiations or exemplifications of a single type - the type computer. The type computer is something you never quite see or touch in itself. In that sense, types are not material - but also notice that functionalism does not become dualism by claiming that the types are real things that stand side by side with the all-too-real tokens. The point is rather this: the type causally makes the token meaningful. How is this done? Because the token is an exemplification of a type, the token is in a sense "caused" by the type. Here is what follows from this: when we observe an effect which is meaningful by being subsumed under a given type, this is all we need to know. Do you see the application for AI? If AI makes sense to us as an exemplification of intelligence, this is all we need to know. Here is an example that illustrates this: suppose you want to beat your computer chess program. What would be a meaningful strategy for you? Picking the right strategy will also tell you under what TYPE it makes sense to subsume the program.
If you subsumed the program under the type "computer program", then you should try to beat it by learning as much as you can about its design, the way it was "written", and so on. But a moment's reflection will inform you that this is not the way to go. You should rather concentrate on learning and becoming better at chess - not at computer programming. This means that you are subsuming the chess program NOT under the type "program" but under the type "intelligent chess player." This is the functionalist response. Do you see what follows? There is no room here for assuming anything else besides that the chess program is an intelligence. What we have here are tokens and types - nothing else. The program you are playing against is the token. What is its overarching type? According to the above argument, the type under which you should subsume the program is "intelligent chess player." Why should you bother doing anything else - and how could you really defend the notion that the program is anything else but an intelligent chess player? This is the functionalist view on the matter. Functionalist philosophers were among the first to suggest that artificial intelligence IS intelligence - the strong thesis, as I called it above.
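If you happen to know some programming, the type/token distinction has a loose parallel in the class/instance distinction of object-oriented languages. This is only an analogy offered here to make the distinction concrete - it is not part of the functionalist literature itself, and the names in the sketch are invented for illustration:

```python
# A rough programming analogy for the type/token distinction
# (an illustrative sketch; all names here are invented).

class Computer:
    """The TYPE: you never see or touch 'Computer' itself, only its tokens."""
    def __init__(self, owner):
        self.owner = owner

# Each object below is a TOKEN: a concrete exemplification of the type.
yours = Computer(owner="you")
mine = Computer(owner="me")

# The tokens are distinct individual things...
print(yours is mine)              # False: two different tokens
# ...but what makes each one count as a computer is the type it falls under.
print(type(yours) is type(mine))  # True: both subsumed under Computer
```

The analogy is imperfect - a Python class is itself an object in memory, whereas a functionalist type is not a material thing - but it captures the key idea that many distinct tokens are made meaningful by the one type they exemplify.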
Others disagree, of course. There are some tough questions to ask here, so let us embark on this project.
Is the software program related to the hardware in the same way in which the human mind is related to the human body? Notice that this way of thinking, on its face and before you even decide how to deal with it, has something dualistic about it. Indeed, Searle, the author of the essay we read, blames computer scientists who make grand claims about AI for a hidden dualism - they are dualists in that they take the mind itself to be something immaterial, and they take the software to be not only immaterial but a real and separate thing [remember that dualism is the theory that acknowledges two kinds of real things - material and immaterial]. If, instead, computer scientists paid attention to the fact that human intelligence is a total biochemical configuration, they might have noticed that AI is not a biochemical, or a species-related, capacity. This is one of the main points Searle makes. What do you think about this?
There are interesting differences between AI and HI [human intelligence]. Freedom, or intentionality, seems to be one such great difference. The computer program has been programmed - period - it did not grow spontaneously. The program does not stop between steps to reach a decision as to what to do and how to proceed next. [But, are we certain that this is NOT what happens with the operations of the human mind? How would you make a strong case that human intelligence is associated with, or entails, freedom?]
The computer program manipulates symbols. Is the human mind similarly bent on manipulating symbols it has learned to be representative of things? Even if this is the case, notice that the human mind 'knows' and understands how symbols correspond to things. Does the computer have a similar experience of correspondence? Does the computer program have ITS own independent way of determining that there is a correspondence between symbols and what these symbols stand for? [Again, are we sure that the human mind can indeed check independently of the symbols themselves that there is such a correspondence? This is a difficult and subtle point, so make sure you take a minute to reflect on this.]
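To see what "manipulating symbols" without understanding might look like, consider a minimal sketch in the spirit of Searle's point. The program below maps input symbols to output symbols purely by rule lookup; nothing in it represents what the symbols stand for. The rules and names are invented for illustration:

```python
# A toy "symbol shuffler": it answers by rule lookup alone.
# Nothing in the program knows, or can check, what any symbol
# corresponds to in the world (all rules here are invented).

RULES = {
    "ni hao": "hello",
    "zai jian": "goodbye",
}

def respond(symbol: str) -> str:
    # The program produces the "right" output whenever a rule exists,
    # yet it has no independent way of determining that the output
    # corresponds to anything outside the rule table itself.
    return RULES.get(symbol, "???")

print(respond("ni hao"))    # hello
print(respond("zai jian"))  # goodbye
```

The question raised above is whether a system built entirely out of such lookups - however large and sophisticated the rule table - could ever have its own experience of the correspondence between symbols and things, or whether it merely shuffles syntax.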
In what sense could we say that the computer has a subjective experience of what the symbols stand for? In this respect, study carefully Searle's thought experiment (the Chinese Room) and the responses, and his rebuttals of those responses, recounted in his essay.
What if we were able to replicate the human brain? In that case, the computer - more properly called a robot, perhaps - would have a plastic brain exactly like a human brain. Suppose we know which nerve synapses in the brain fire when a human being is in pain. Then, we make exactly the same synapses fire in the plastic brain we have put in a dummy. Should we say that the dummy is in pain? Why? Or, why not? Suppose even that we equip the dummy with the capacity to utter agonizing cries to accompany the firing of the appropriate synapses. Have we replicated human intelligence in some way? Why or why not?
It appears that there are certain human-intelligence operations that cannot be captured by a systems-language - and, therefore, cannot be consistent with the language of AI. For instance, you know that there is no odd number that is the sum of even numbers. You do not need to keep performing additive operations to reach this conclusion. A computer program would never stop if you asked it to find this kind of number - unless, of course, you went back and reprogrammed the computer accordingly. This is the kind of spontaneous insight that AI seems incapable of. How impressed are you by this example? What does it really show? Does it make a conclusive and definitive case that human intelligence is different from AI?
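The point can be made concrete with a short sketch. A human sees at once that the sum of even numbers must be even; a brute-force search, by contrast, just keeps checking cases and never succeeds. The search bound below is my own addition, there only so the demonstration terminates:

```python
# A sketch of the odd-sum-of-evens example: searching case by case
# for an odd number that is the sum of two even numbers. A human
# grasps immediately that no such number exists; an unbounded search
# would run forever. The bound exists only so this demo halts.

def find_odd_sum_of_two_evens(bound):
    for a in range(0, bound, 2):        # even numbers only
        for b in range(0, bound, 2):
            if (a + b) % 2 == 1:        # looking for an odd sum
                return (a, b)
    return None                         # no counterexample found up to bound

print(find_odd_sum_of_two_evens(1000))  # None: the search never succeeds
```

Notice that the program has checked a quarter of a million pairs and still "knows" nothing general; it could not tell you whether a counterexample lies just beyond the bound. The human insight is of a different kind altogether - which is exactly what the example above is meant to show.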
How much have you clarified about human intelligence by reflecting on the differences between HI and AI?