Minds and Computers: An Introduction to AI by Matt Carter
Nicholas Everitt thinks about Matt Carter thinking about computers thinking.
There is now a very wide range of sound introductory texts in the philosophy of mind. Matt Carter’s new book offers something rather different. His opening six chapters include material which will be very familiar to any student of the philosophy of mind: dualism, behaviourism, materialism, functionalism. But his main concern is to outline and defend the possibility of a computational theory of mind. Three chapters outline in a formal, rigorous way a variety of concepts necessary for understanding what computation is, and the remainder of the book aims to show how this formal machinery might be invoked in an explanation of what the mind is and how it works. Carter’s cautious conclusion is that on the one hand there is no objection in principle to the programme of strong artificial intelligence – ie, that there can be systems which display (and so have) mentality simply in virtue of instantiating certain computer programs – but that on the other hand, our best available programs are ‘woefully inadequate’ to that task.
Carter succeeds admirably in explaining why this might be so. The opening chapters will be fairly simple for philosophy students, but the material thereafter will be almost wholly new, and not available elsewhere in such a user-friendly form. For students of artificial intelligence (AI), the book explains very clearly why the whole artificial intelligence project presupposes substantive and controversial answers to some traditional philosophical questions. The book is a model exercise in interdisciplinarity. It’s also written lucidly, with regular summaries of important points. An Appendix supplies a useful glossary of technical terms.
So far, so good – very good, in fact. However, as usual among critics, I want to make a number of critical comments, of increasing weight. The first and least weighty: though Carter sprinkles the text with exercises, he rarely supplies any answers, which students will surely find frustrating.
Next, he doesn’t mention one set of reservations some philosophers have about AI. That is the tendency among cognitive scientists to attribute to the brain certain activities which (so the criticism goes) belong only to the whole person. For example, in accounting for perception, the scientists will speak in terms of the brain receiving messages, interpreting data, constructing hypotheses, drawing inferences, etc, as if the brain were itself a small person. Carter may well find such criticism unpersuasive, but it would have been good to give it an airing, as it has been a significant issue between defenders of AI and their critics.
The third reservation is a matter of substance. Computer programs operate on purely ‘syntactic’ features – ultimately speaking, they depend upon the physical form of the inputs, transformations and outputs. By contrast, human thought is always a thought about something, it represents something, it has a content. It displays what philosophers call ‘intentionality’. One central problem for artificial intelligence is how to get aboutness into computer programs – how to get semantics out of syntactics.
Carter’s answer is to invoke experience. What enables certain expressions of syntax in our heads to represent features of the world is that they are linked with the external world, and the linkage comes about because we experience the world. “In order for our mental states to have meaning [intentionality], we must have antecedent experience of the world, mediated by our sensory apparatus” (p.179). Now this response might be helpful for a computational theory of mind if experience could be explained in purely computational terms. Some philosophers and AI theorists believe that this can be done, but arguably the move is not available to Carter. For earlier in the book he committed himself to an account of experience which seems to preclude a computational treatment.
Carter thinks that all our experiences have a qualitative aspect: that they include so-called qualia. There is something it is like to see the colour red. Visual experience is more than merely receiving certain physical inputs in the form of light waves, undergoing certain transformations in the brain and producing physical outputs such as speaking the sentence “There is something red.” What it is like to be in any given experiential state, says Carter, “can be known only by having the first person experience of being in the state” (p.43). However, if this is correct, it surely cannot be squared with a computational theory of experience. Carter thinks that detecting qualia requires you to have that experience yourself; but there is no reason to think that detecting a particular computer program requires you to be an embodiment of that program yourself. Therefore experience can’t be like a computer program.
Carter tries to avoid the qualia problem by saying that it is not important that we each have qualitative experience unknowable by other people, so long as we agree on which things are red and which are not. But this seems inadequate. If a computational theory of mind requires getting semantics out of syntactics, and this requires a connection with the real world via our sensory experiences, and these experiences essentially involve qualia, we can hardly accept that qualia are unimportant. They are precisely what makes experience experience, and so mind mind.
It seems that Carter is faced with a trilemma. He needs to explain how he thinks a computational account can be provided of qualia; or he needs to abandon a qualia-based account of experience, in favour of some computational account; or he needs to abandon his conclusion that there is no objection in principle to a purely computational account of the mind.
However, it would be unreasonable to expect an introductory text such as this to provide the solutions to these problems. What it needs to do is to give readers a sense of the issues involved. Carter’s text does that extremely well.
© Nicholas Everitt 2008
Nick Everitt has retired from the University of East Anglia, but not from philosophy.
• Matt Carter, Minds and Computers – An Introduction to the Philosophy of Artificial Intelligence, Edinburgh University Press, 2007, 222 pages.