
Do Computers Have Syntax?

Michael Philips on the question of whether computers can think.

No philosophical argument has inspired as much anger and glee in the last hundred years as John Searle’s Chinese Room Argument. Given the hundreds, perhaps thousands of responses to Searle’s paper, it’s hard to believe that this will be the first to identify the central flaw in that argument. But every defense and criticism I’ve read accepts Searle’s claim that digital computers can’t think because they get their results in the wrong way. They get them by manipulating symbols that are meaningless to them according to rules. This is what Searle means when he says that computers have syntax. Unfortunately for his argument, they don’t.

The Chinese Room argument has two related purposes. It’s supposed to refute a certain test for the claim that computers can think, the Turing Test. And it is supposed to help prove that digital computers can’t think. Searle (and most commentators) thinks that if it achieves one of these goals it also achieves the other. Not so: although the Chinese Room Argument succeeds against a certain version of the Turing Test, it does not help with Searle’s positive argument against computer cognition.

Refinements aside, the Turing Test says that a computer thinks if we can’t tell the difference between the output of a computer and the output of a person in a wide enough range of cases. The test may be more or less demanding, depending on how wide a range counts as wide enough. The Chinese Room argument is directed against a not very demanding version. It is an argument by analogy, the bare bones of which are as follows. A man who reads no Chinese sits in a room. Sets of symbols are passed into the room through a slot. His instructions are to find those symbols in a reference book, to identify a set of symbols adjacent to those symbols, to copy the latter set of symbols and to pass the copied symbols out through the appropriate slot. Though he doesn’t know it, both sets of symbols are Chinese characters. The set of characters passed into the room asks a question and the set of characters passed out of the room answers that question. Thus, by merely manipulating symbols that are meaningless to him the man in the room manages to respond to Chinese questions with Chinese answers. He passes the Turing Test with respect to the Chinese question/answer game. But he doesn’t do this by thinking. So, Searle concludes, the Turing Test fails.
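To make the rulebook procedure concrete, here is a minimal sketch in Python. The lookup table and its entries are my own invention for illustration; the point is only that the operator function matches an incoming string and copies out the adjacent one without ever interpreting either.

```python
# A toy 'rulebook': incoming symbol strings paired with outgoing ones.
# The entries are invented for illustration only.
RULEBOOK = {
    "你好吗？": "我很好。",          # "How are you?" -> "I am fine."
    "你会思考吗？": "这是个好问题。",  # "Can you think?" -> "That is a good question."
}

def chinese_room(symbols_in: str) -> str:
    """Find the incoming symbols in the book; pass out the adjacent set.

    Nothing here treats the strings as meaningful; the procedure only
    matches shapes and copies the paired shapes back out.
    """
    return RULEBOOK[symbols_in]

print(chinese_room("你好吗？"))  # a correct answer, no understanding required
```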

But this is too fast. Searle’s analogy shows that we don’t have to know the meaning of a set of symbols to answer questions expressed by those symbols. But this isn’t enough to refute even the Chinese question/answer version of the Turing Test. After all, the man in the Chinese Room needs to think to do his job. In particular, he needs to follow instructions, identify symbols, look them up in a book, identify matching symbols and so forth. Ironically, if computers really are like the Chinese Room (or the man in it), the analogy suggests that they can do some thinking too.

This could be handled by a friendly amendment. We can replace the man in the Chinese room with a noncomputational mechanical device that can also match Chinese questions to Chinese answers. I have, in fact, invented such a device. It involves chemical solvents, two conveyor belts, mechanical arms, lights that shine through spaces made in cards, mechanisms triggered by patterns of light and other such Rube Goldberg components. Assuming that the man in the Chinese room passes the Turing Test, my machine can as well. But who would want to say it thinks? So this Rube Goldberg Room is a counterexample to this version of the Turing Test.

As suggested, Searle also wants to prove that computers can’t think. His argument is as follows: 1) something is capable of thought if and only if it is capable of grasping meanings; 2) digital computers can’t grasp meanings; therefore, 3) digital computers can’t think. Searle believes that the Chinese Room argument establishes step 2. This is because he assumes that digital computers get their results the way the man in the Chinese Room achieves his (syntactically).

There are two problems with this. The least serious is that the man in the Chinese Room needs to think in order to do his job. He must understand instructions and recognize symbols. Even if he gets his results syntactically, having syntax requires grasping meanings (of some kind).

The more serious problem, though, is that the Chinese Room analogy seriously misrepresents what computers do. Computers don’t literally ‘have syntax’. That’s just a useful metaphor (like “My car is trying to start.”). Let’s get down to cases. I enter keystrokes ‘commanding’ my computer to add 5 to 7. According to Searle my computer doesn’t know what ‘5’ means, what ‘7’ means or what ‘+’ means. But it knows that it has been instructed (requested? commanded?) to find certain strings of symbols and to perform certain operations on them. So it searches for those symbols, identifies them, and manipulates them according to instructions. Its final instruction is to splash a pattern of electrons on a cathode ray tube. We identify that pattern as a 12. The computer doesn’t, but it knows it has been instructed to spray it and it can tell the difference between that pattern and other patterns that are within its power to spray.
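If this syntactic picture were right, ‘adding’ would be nothing more than rule-governed symbol shuffling. Here is a minimal sketch of what that would look like, using tally notation so that the rule is visibly about marks rather than numbers (the notation and function are my illustration, not a description of any real machine):

```python
def syntactic_add(expression: str) -> str:
    """'Add' two tally-notation numerals by pure symbol manipulation.

    The rule: split the string at '+', then copy the marks from both
    sides into one string. Nothing here treats '|' as a number; the
    procedure only moves shapes around.
    """
    left, right = expression.split("+")
    return left + right

# '5 + 7' written as tallies: five strokes, a plus sign, seven strokes.
print(syntactic_add("|||||+|||||||"))  # twelve strokes: '12' in tally notation
```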

This is what a computer would do if a computer ‘had syntax’ (like the man in the Chinese Room). But think about it, does my computer literally recognize my keystrokes as commands? (“Better get to work now, duty calls.”) If it knew what a command was, it would have semantics. Does my computer literally search for strings of symbols? (“No, not that one, too many zeros; not that either, I need a 1 in the first position.”) If it knew what symbols were, it would have semantics too. It doesn’t search for 1’s and 0’s identified merely as round shapes and thin vertical lines, either. There are no round shapes and thin vertical lines inside my computer. Look for yourself.

The straightforward, literal truth is this. By hitting ‘5+7’ on the keyboard I send a pattern of electrical impulses from the keyboard to the motherboard of my computer. These impulses initiate a causal sequence of electrical events within the machine. These impulses culminate in a final event: a spray of electrons on the monitor that looks like a 12 to me.

It’s well beyond the scope of this article to explain how these causal sequences produce the astonishing results they do. But the basic building blocks of a computer are called ‘logic gates’. A logic gate is just the physical expression (embodiment, mapping) of a logical relationship. NAND gates, for example, are physical expressions of the logical relationship not(and). Just as not(and) has two possible values, true and false, so NAND gates have two possible states: things get through them or they don’t. Not(p and q) is true if and only if at least one conjunct is false. So something gets through a NAND gate if and only if at least one ‘entry’ to that gate is in whatever physical condition (say, off) corresponds to false. The NAND gates in my computer are made of wires and transistors. But in principle, we could build them out of pipes, valves and running water. In general, all we need to build a computer are ‘parts’ that are limited to two states that can be combined into larger wholes that also have this property (where the outputs of the parts determine the outputs of the whole). A system of pipes and valves will work just fine. Each valve can be on or off. And they can be combined into larger systems in the right way.
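To see how far two-state parts can go, here is a minimal sketch in Python: a NAND ‘gate’ simulated as a function, a few familiar gates wired up from it alone, and a half-adder that adds two one-bit inputs purely by that wiring. (The simulation is my illustration; a physical gate is wires and transistors, or pipes and valves, not code.)

```python
def nand(a: int, b: int) -> int:
    """The one primitive: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Every gate below is built entirely out of NAND.
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two one-bit inputs; returns (sum_bit, carry_bit)."""
    return xor(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))  # 1 + 1 yields sum 0, carry 1
```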

We have no temptation at all to say that a digital computer made of pipes, valves and running water has syntax. We also have no temptation at all to say that it thinks. So how does the problem of computer cognition get its foothold?

The answer is that we aren’t really wondering about digital computers as such but about our current crop of digital computers. These computers are not made of valves and pipes. They are powered by electricity, which also plays an important role in our brain. Also, they embody (express, map) logical relations and problem-solving strategies electronically. These similarities to our brain make some of us wonder whether there really is an important difference between the activities brains support and the activities our current crop of computers can support. Does the fact that they are made of different stuff imply they get their results without thinking? What about the fact that they may be organized in a somewhat different manner? These are the real questions and Searle’s characterization of computers as mere syntactic devices masks them.

Two races of aliens land in a single spaceship. They make jokes, propose theories, complain of pains and heartaches and recite ecstatic poetry to each other (and, eventually, to us). Half of them have ‘brains’ located in a cavity equidistant from their twelve ‘legs’ and made of different stuff than ours (glicks). But miraculously, their brains are organized in exactly the same way ours are and move from state to state in exactly the same way too (for each of our neurons there is a corresponding glick and for each change in one of our neurons there is a corresponding change in the corresponding glick). The other half have ‘brains’ made of the same stuff as ours but organized in different ways (say, digitally). Neither group doubts that their fellow travelers think. Neither should we (unless we learn something more that gives us reason; for instance, that they are operated by signals from an orbiting ‘mother ship’). The fact is that we have no principled reason for saying a thinking ‘brain’ must be made out of a certain stuff (e.g., protoplasm) and no principled reason for saying that it must be organized in a certain way either. Thus far, we know (very roughly) how our brain works, but that’s it. It follows that we have no principled reason for denying that a digital computer can think. By ‘principled reason’ I mean a reason based on compelling philosophical argument or general scientific law.

Of course, we also have no principled reason to deny that George W. Bush is controlled by signals from an orbiting mother ship. The point is that we have no reason to believe he is. Does this also hold for thinking computers? Well, it all depends on what we mean by ‘thinking’. If thinking is just a matter of producing certain results by means of certain electro-chemical processes, then why not? We have no reason to be sticklers about chemistry. But if that’s all we mean by thinking, we might as well say that home security motion detectors think.

This is not the sense of ‘thinking’ that makes the question ‘Can computers think?’ interesting to most of us. Most of us are wondering whether, or to what extent, digital computers (or any machines we make) can have an internal life like our own. For us, thinking is an activity carried out by a conscious being. This doesn’t mean that all thinking is necessarily conscious, but only that to think – to have an internal life that resembles our own – something must be conscious. The real questions are: 1) what kind of stuff does something need to be made of to be conscious?; and 2) what kind of internal organization does something have to have to be conscious? At present, we have no scientific basis for answering either question. We know what is sufficient from our own case. But we have no principled basis for saying that the stuff of which we are made and the way in which we are put together are also necessary conditions of consciousness.

The best we can do at this point is continue thinking about that age-old problem in the philosophy of mind, namely, how do we know whether something is conscious? Without a test for consciousness, we will never be able to identify conscious beings and therefore, never be able to say what kinds of physical systems support consciousness. It’s not clear whether all supporters of the Turing Test think that test offers the answer. Some of them use the word ‘thinking’ in a sense that would ennoble the thermostat. Others regard the Turing Test as a way to by-pass annoying philosophical questions and just get on with one’s grant writing. But to people interested in the inner lives of machines, the interesting question is whether the Turing Test can be adequate as a test of consciousness. In other words, can there be a behavioral test of consciousness?

The answer is ‘no’. No plausible test for either thought or consciousness can depend entirely on behavior (output). At the very least, behavior produced in certain ways can’t count. Imagine a human-looking doll cleverly equipped with video cameras, microphones, springs and gears remotely controlled by a human operator with the aid of virtual reality technology. Presumably, such a ‘creature’ could pass a very demanding version of the Turing Test, but who would say it thinks? To be plausible the test must rule out this way of producing the behavior. For instance, it must say that the behavior right now can’t be traced to what some person is doing right now. In that case, neither the doll nor Searle’s Chinese Room would pass the Turing Test. The Rube Goldberg Room, though, would pass the easier versions.

© Michael Philips 2002

Michael Philips is a professor of philosophy at Portland State University in Portland, Oregon. In his spare time he is a photographer and performance artist.


The Turing Test

The purpose of the Turing Test is to find out whether a computer can think. The experimenter sits in front of a keyboard and screen. Cables connect her keyboard to two nearby rooms. In one room sits a human being with a keyboard; in the other room, the cable is connected to the computer being tested. The experimenter types questions addressed to either Room A or Room B. She has to work out from the answers whether she is communicating with the computer or the human. If after many questions she still can’t tell which is which, then the computer is assumed to be able to think.


The Chinese Room

The purpose of Searle’s Chinese Room thought experiment is to show that even a computer which passes the Turing Test doesn’t really think. Imagine a man who knows no Chinese sitting in a room. Cards with unfamiliar symbols on them are pushed into the room through a slot in the wall. He takes the cards, looks them up in a book and in accordance with the rules in the book, he then pushes cards with other symbols out through the slot. The symbols are in fact Chinese characters, and a Chinese speaker standing outside the room mistakenly believes he is having a conversation with the person in the room.
