Cleaning out the Chinese Room

Could computers have minds? Julian Moore criticises John Searle’s famous thought experiment.

Thinking machines, particularly thinking computers, are not inconceivable: we seem happy to grant minds to Commander Data of the Starship Enterprise, Marvin, the Paranoid Android from the Hitch Hiker’s Guide to the Galaxy, Kryten of Red Dwarf, and even the Terminator, to name a few fictional examples of programmed devices. At the very least, we are willing to suspend our disbelief, even though we cannot begin to build anything like them. Whether our inability will be overcome by time and technology, or whether it is really a matter of philosophical possibility is an intriguing question. We are now so familiar with computers and with the break-neck pace of their development, that a thinking computer seems almost inevitable. But there is as yet no proof either way.

I believe that whilst we are unlikely to demonstrate it soon, it is possible for a digital computer to have a mind. There are many others who believe the contrary. One of the best known, and most informative arguments on this topic is John Searle’s ‘Chinese Room’, which seeks to demonstrate that a digital computer could never do what we do with our brains. But before entering the Chinese Room, it is worth examining its foundations.

Where are we now?

The beginner’s first program in the ‘C’ language traditionally results in one line of output. It says simply “Hello world.” The program runs, prints, and exits.
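For the record, a minimal version of that traditional first program might look something like this (the exact wording of the greeting varies from textbook to textbook):

```c
#include <stdio.h>

/* The beginner's traditional first C program: print one line, then exit. */
int main(void)
{
    printf("Hello world.\n");
    return 0;
}
```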

It’s a very basic start, but a moment’s reflection shows that it is nonetheless an impressive achievement. It requires a remarkable combination of electronic devices – processor, memory, disk, video drivers, etc. – to interact reliably and predictably under the control of an equally remarkable assembly of software – operating system, compiler and so forth. Such an achievement is possible because we human beings have an ability to dissect problems into manageable chunks, and then to generate procedures, or algorithms, for implementing special processes to solve those problems. As a measure of how good we have become at this, with a moderately fast PC, several dozen megabytes of memory, and the right software, we can now talk to our computers and have them follow simple spoken instructions or turn our speech into text.

We may say casually that such a system ‘understands’ what we say to it, but no one expects to have a real conversation with their computer. What we wish to know is whether, given the exponential growth in computing power, and hence the ability to run increasingly complex algorithms (and to run them faster), there will come a time when our computers will understand, will be able to converse with us intelligently.

Computers have steadily improved in their ability to carry out simulations of the things we can describe objectively. Consequently they have been extremely successful at simulating flying aircraft, the weather, the behaviour of skyscrapers in high winds, and so forth, but none have been able to simulate one of us, the way we think, the way we understand the world. There is an argument that even if they could simulate our understanding, they would not possess it. After all, rain in a simulation of the weather does not result in the computer becoming wet, but this argument misses the point. Simulations of physical reality are not themselves physical, but when the subject of the simulation is nonphysical, such as the manipulation of symbols, then to posit a distinction between the thing or process simulated and the thing itself is more problematical: for example, how would one ‘simulate’ addition, without actually carrying it out?1 A digital computer might possess a mind simply by virtue of running the right type of program, if mind is of a similar nature.

There have been numerous attempts to simulate simple ‘conversation’ via the keyboard. In 1965 Joseph Weizenbaum created a program called ELIZA, which would engage in ‘dialogue’ through keyboard and screen; ELIZA played the role of a ‘non-directive therapist’, with people adopting the roles of ‘patients’. Although such programs have been developed to fulfil serious research goals, for the casual interlocutor the results can be amusing. Once upon a time, I almost ‘persuaded’ ELIZA that it was mistaken, that I was the therapist and it was the patient. But because ELIZA was really only manipulating in a simplistic way the words and phrases supplied by me and its programmers, it was not persuaded in the normal sense of the word: it did not genuinely understand.
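To give a flavour of how simplistic that manipulation is, here is a toy sketch in C – not Weizenbaum’s program, and with invented keywords and canned replies – of the kind of keyword-and-template substitution on which ELIZA-style ‘dialogue’ rests:

```c
#include <stdio.h>
#include <string.h>

/* A toy ELIZA-style responder: scan the input for a keyword and emit a
   canned 'therapist' reply. No meaning is involved, only string matching. */
static const char *keywords[]  = { "mother", "always", "computer" };
static const char *responses[] = {
    "Tell me more about your family.",
    "Can you think of a specific example?",
    "Do machines worry you?"
};

static const char *reply(const char *input)
{
    for (size_t i = 0; i < sizeof keywords / sizeof keywords[0]; i++)
        if (strstr(input, keywords[i]))
            return responses[i];
    return "Please go on.";            /* default when nothing matches */
}

int main(void)
{
    printf("%s\n", reply("My mother makes me use a computer."));
    return 0;
}
```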

Does ELIZA demonstrate some fundamental limitation? Could we create and program something that understands, gives meaning to its input and embodies meaning in its output – a truly intelligent machine? These are the goals of some researchers working on Artificial Intelligence, but are they realistic? Before we can begin to answer, we need to understand better what ‘understanding’ and ‘meaning’ mean.

In this arena, the key-words are ‘semantics’ and ‘intentionality’, which reflect the ideas that our utterances mean something, and that our thoughts are about something. What we are asking is whether a computer could have intentionality given a clever enough program.

The question reflects the continuing perplexity of the mind-body problem. We generally accept that our brains are the seat of mind and consciousness, because even though we cannot ‘feel’ our brains, if they are affected by physical damage, chemical interference or electrical stimulation then so are our minds. With our knowledge of biology and chemistry we have been able to observe the neurons of our brain, to monitor the chemistry of their firings and interactions, and to correlate the activity of relatively large groups of them with certain types of mental activity. The brain was once an amorphous lump, but now we ascribe particular functions to certain regions, and it is seductive to think that we could extend our understanding still further to comprehend the contributions of individual brain cells. Such an understanding, allied with the power of computer simulation, suggests we could in principle simulate our own brains, and if we simulate all relevant causes, the effects should follow automatically.

Unfortunately for brain simulation, the completeness of knowledge is inevitably doubtful – Ockham’s Razor (go for the simplest explanation) is a tool of convenience, not necessity. The omission of some aspect of neuronal function or interaction might prove crucial, in which case the simulation would not accurately reflect the operations of our brains. Even when phenomena are well understood, the skill of computer simulation lies in disentangling cause and effect, in recognising which features of the thing to be simulated are intrinsic to its behaviour, and which are incidental. To be strongly non-technical, it is all rather iffy.

There is however a more serious problem with simulating a human brain: practicality. There are on the order of a hundred billion neurons in our heads, each of which may connect with thousands of others. We simply do not have sufficient hardware or time to simulate the brain in all its glory.

Nonetheless, the deterministic behaviour of brain cells (neglecting the quantum mechanical influences favoured by Roger Penrose in The Emperor’s New Mind and Shadows of the Mind) is powerfully akin to the deterministic behaviour of computers. The analogies between individual neurons and transistors, and between the complexity of brains and that of computers, allow us to imagine that our brains are computers – albeit biological instead of electronic, running acquired or evolved programs instead of externally supplied algorithms. We may be led to think that the brain works like a digital computer, and that if we could program a silicon-based computer appropriately we would have an artificial mind.

This is the view that Professor John Searle of the University of California (Berkeley) characterised as ‘Strong’ Artificial Intelligence, or just ‘Strong AI’, as distinct from the view that the brain is not a computer, but that computers may be useful tools for modelling and testing hypotheses about the brain, which can be called ‘Weak AI’.

The vehemence of the debate surrounding computer minds echoes the furore of the Copernican revolution, which marked the end of the geocentric Ptolemaic universe. Strangely, nearly two thousand years elapsed between the time Aristarchus (d. 230 BCE) first proposed the heliocentric theory and the great work of Nicolaus Copernicus (d. 1543), On the Revolutions of the Celestial Orbs, which revived the debate. That heliocentrism should be violently opposed in the 16th century after languishing in neglect for so long says as much about the prominence of the church at that time as it does about the emerging idea of undogmatic scientific investigation. It was becoming conceivable that science, powered by new minds and equipped with new tools, might actually provide objective answers, and so fatally undermine a cherished view of our place in the firmament.

Artificial Intelligence is in a not dissimilar state. It has dogmas and dogmatists on every side, and more fervour and antagonism than seems usual in philosophy; the debate is also of interest to the public at large, for whom Frankenstein’s monster now seems more likely to emerge from the computer laboratory than from the morgue.

AI seems to be in need of its own revolution. Some of the antagonism towards the Strong AI position may perhaps be explained by the concern that, having lost our place at the centre of the physical universe, to surrender the uniqueness of our minds would be to lose our last claim to any privileged position in the cosmos. If we should discover, create or admit the possibility of a computer mind, many beliefs would be questioned, and some would inevitably fall by the wayside.

The questions are legion. What would be the relationship between the mind of a machine and God? Would an intelligent computer have a soul, and if so, when does it come into existence? What happens to a machine’s soul if the machine is switched off – or its operations simply suspended indefinitely; are androids destined for Kryten’s silicon heaven? And on a more practical note, what rights should such a mind be granted, indeed what rights might it demand for itself?

In the debate on artificial intelligence, Searle and Penrose epitomise the Ptolemaic view: Penrose specifically invokes quantum mechanics, Searle calls on more general, peculiar, and otherwise inexplicable ‘causal powers’ of the human brain to account for our minds. Strange ‘powers’ and stranger quantum mechanics may be the epicycles of the anti-Strong-AI position, convenient fictions to explain the otherwise incomprehensible. The hope is that a better understanding of mind will sweep away confusion, to confound the sceptics or disappoint the optimists.

Nonetheless, one of the great strengths of the Chinese Room argument is that it successfully highlights a number of philosophical issues, and challenges our understanding of mind, even though I do not think it succeeds in dealing a death-blow to the idea of computer minds.

The Chinese Room

Searle created the thought experiment known as the ‘Chinese Room’ in order to explore the relationships between computers and programs, and brains and minds, with a view to demonstrating that intentionality cannot arise from even the most sophisticated computer program. The Chinese Room analogy remains valuable because it is so easy to understand and provides easy access to the key issues – in contrast to Penrose’s highly mathematical arguments about algorithms in general. It has stimulated such a volume of debate that it is impossible to do justice to all points of view here, but perhaps I can give an idea of the shape of the debate.

As described by Searle himself, the Chinese Room comes in at least two forms, and in dealing with them one needs to be careful about which version is being referred to. I shall refer to the less technical version, as used by Searle in his 1984 Reith Lectures2, whose essence is this.

Instead of a computer and its program, imagine a locked room containing a man (the ‘agent’) whose native language is English and who neither reads nor speaks Chinese. He has a supply of Chinese ideograms, and a rule book in English which tells him how to manipulate them. From time to time, a batch of characters is passed into the room; he refers to his rulebook, examines the characters he has received and passes back another set constructed according to the input and the rules.

Unbeknown to him, the batches of Chinese writing he receives periodically are in fact questions posed by native Chinese speakers and writers, and so they call the batches he gives back ‘answers’. Furthermore, the rules he has been given are so sophisticated that the Chinese speakers cannot tell that their hidden correspondent is not himself a native Chinese speaker. He seems to understand the questions3, because they understand his answers. But the man in the room has no idea what is going on.

The fundamental question is whether the fact that the Chinese Room’s answers are as good as those of a native speaker is sufficient to say that it truly understands. Searle’s answer is unswervingly “No”. The reason he gives is that if we could somehow learn to recognise all the symbols, memorise the rules and carry out all the operations in our heads – without the rest of the paraphernalia of the room – we would still not understand Chinese. It follows that if we cannot understand, no computer that carries out such a program can truly understand either – however convincing it appears; QED. The thesis is that no amount of processing or programming ingenuity can bridge the gap between syntax and semantics.
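The purely syntactic character of the room is easier to see if one pictures the rule book as nothing but a table pairing incoming strings of symbols with outgoing ones. The sketch below is my own caricature, with transliterated placeholders standing in for the ideograms and an invented rulebook: it does just what the man does – match patterns and emit patterns – and nothing in it refers to anything beyond its own symbols:

```c
#include <stdio.h>
#include <string.h>

/* A caricature of the rule book: each rule pairs an incoming symbol string
   with an outgoing one. The program 'answers' without any notion of what
   either string means. Placeholders stand in for the Chinese ideograms. */
struct rule { const char *in; const char *out; };

static const struct rule rulebook[] = {
    { "SYMBOLS-FOR-'WHAT IS YOUR NAME?'", "SYMBOLS-FOR-'MY NAME IS WONG.'" },
    { "SYMBOLS-FOR-'DO YOU LIKE TEA?'",   "SYMBOLS-FOR-'I PREFER JASMINE TEA.'" },
};

static const char *answer(const char *question)
{
    for (size_t i = 0; i < sizeof rulebook / sizeof rulebook[0]; i++)
        if (strcmp(question, rulebook[i].in) == 0)
            return rulebook[i].out;
    return "SYMBOLS-FOR-'PLEASE REPHRASE THE QUESTION.'";
}

int main(void)
{
    puts(answer("SYMBOLS-FOR-'DO YOU LIKE TEA?'"));
    return 0;
}
```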

Prominent replies to the Chinese Room include:

The Systems Reply: The agent doesn’t understand, but the whole system does.

The Robot Reply: The agent would understand if only the system were embedded within a robot that could interact with its environment.

The Brain Simulator Reply: The system needs more: simulate the actions of neurons and synapses and use parallel processing as necessary.

and several pick-and-mix combinations of the above.

Searle says that the Systems Reply fails because “There is no way that the system can get from the syntax to the semantics. I, as the central processing unit, have no way of figuring out what any of these symbols mean; but then neither does the whole system.” There are further objections to the implications of the Systems Reply: it might imply that one’s stomach understands certain things because it receives certain inputs, ‘processes’ them (in ways which might be described by a computer program) and provides output – but these are relatively minor issues.

The Robot Reply is said to be similarly flawed because whatever the interface to the environment, all input ends up as equally incomprehensible Chinese symbols to be processed according to the rules, and so the objection to the Systems Reply is still applicable.

Simulating the brain is no good either, apparently. Embroidering ad absurdum, Searle suggests that we carry out the simulation on a hydraulic, rather than electronic, computer. Its digital credentials can be made equally strong with binary valves (open or shut) operated by an agent following rules in English according to received Chinese characters. But where would understanding reside in such a system, he asks? The inference is that such a system cannot possibly entertain understanding – but is this not begging the question?

Building on some of the ideas associated with the Chinese Room, a further thought experiment illustrates some potentially awkward consequences. Suppose that, armed with the latest and greatest scanner technology, my living brain is examined in situ, and to my horror I find my neurons apparently under the control of an alien quantum homunculus busily shuffling alien runes, activating and de-activating the chemical channels of my neurons in a delightfully digital fashion. I have no reason to expect that it understands English, but would my understanding of my native tongue evaporate the moment it becomes apparent that the firing of my neurons is being dictated by some alien rule-book? Would I wonder whether I ever truly understood English? Or, since I seem to understand English, would I be logically compelled to endow the alien with equal understanding? None of the implications seem reasonable.

But let us dig deeper. What would happen if somehow my alien homunculus could be swapped with that of an equally brave Chinese speaker? Would I cease to understand English and suddenly become fluent in Chinese? I think not; my belief is that understanding belongs to me and not to my brain, although my brain plays a vital role. I see no reason why I should lose my understanding of English, or doubt that I understand, simply because I have uncovered some unsuspected aspect of its operation – such as an alien invasion.

Given that most objections to the Chinese Room argument supposedly fall into the yawning gap between syntax and semantics, we ought to enquire into the nature of this gap: how do meaning and understanding arise?

Searle himself summarises the Chinese Room argument in simple steps:

1. “Programs are entirely syntactical.
2. Minds have a semantics.
3. Syntax is not the same as, nor by itself sufficient for, semantics
4. Therefore programs are not minds. QED”

The problem here is in step 3: why is syntax insufficient for semantics? I don’t know, unless ‘syntax by itself’ specifically excludes external input. Searle dismisses many replies that invoke external input, stating that it simply reinforces his point that formal symbol manipulation alone is insufficient. The key question of the syntax-semantics problem is: how does meaning arise?

It seems to me that ‘understanding’ requires external references. No symbol in a permanently isolated system can be ‘about’ anything except another symbol, or group of symbols. Since understanding and intentionality are things that we grant to others on the basis of observation (our question is whether we would grant these things to a computer mind), we need common reference points. Without a shared environment, there could be no meaningful observation, let alone communication.

If excluding contact with the environment necessarily excludes the possibility of understanding – regardless of the mechanism, causal powers or other attributes of the system – no inferences about such attributes can be made from an isolated system such as the Chinese Room. Some contact with the environment to establish reference points seems essential. Such contact need not be continuous: I could envisage complete sensory deprivation but would still expect to retain my intentionality – it would after all be different only in degree to lying still with my eyes closed in a darkened room. Here the point is that once intentionality and semantics have arisen, they may persist, but they cannot arise ex nihilo.

I said earlier that I believe a digital computer could possess a mind. If mind depends on external references, however, the converse also holds: even a human brain might not possess a mind if it has always been isolated, regardless of any special ‘causal powers’ it may possess or the intrinsic quantum mechanical nature of its construction.

There is however another consideration: who or what does the understanding? In the case of motor skills, we might be willing to say “My brain knows how to do something, but I don’t”, because we cannot always articulate or explain what we are doing and why. For instance, I can throw a ball, my brain knows how to – and in one sense therefore, so do I – but I could not provide instructions on how to throw that ball in terms of which muscles to exert when, and for how long, according to the distance and direction of the target. The best I might achieve is: “Like this…”

But when it comes to meaning, i.e. ‘knowing that…’ as opposed to ‘knowing how to…’, we would never think to say “My brain understands, but I don’t”, and we always say “I understand” not “My brain understands” – it is not, after all, some thing with which we communicate. Whilst we must be careful not to be misled by our use of language, insofar as the Chinese Room embodies just such implicit reasoning, this suggestive alternative illuminates the problem.

When one hears talk of ‘the computer’ understanding or possessing intentionality, one should take care: if ‘the computer’ means just the hardware itself, it may not possess understanding, but this does not exclude the possibility that it ‘contains’ a mind which does. The mind need not be a physical thing, nor need it be immaterial in the spiritual sense. Just as waves of pressure in the air are sound, so the mind might simply be the patterns of thought that occur in the brain.

And so we return to the mind-body problem: it makes most sense to say that understanding occurs in the mind, and the most serious omission from the Chinese Room is due consideration of this. But if anyone should ask me “Where is your mind?”, until the revolution comes my only mildly facetious answer must be simply “Here.” Until the revolution does come, I think it remains a distinct possibility that artificial minds could be created on digital computers.

© Julian Moore 1998

Julian Moore has a degree in physics from Bristol University. Whilst still functioning as an IT consultant, he considers himself primarily a poet and philosopher.

1 Actually, it is very easy if the set of numbers that can be added is limited: one can create a look-up table which is a two-dimensional array. The numbers to be added are used as row and column indices and the intersection of the two provides the pre-computed result of their addition. However, such an approach does not work for arbitrary numbers.
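In code, that lookup-table approach might be sketched as follows for single-digit operands (the table is pre-computed once – here by a loop, though it could equally be written out by hand – and each subsequent ‘addition’ is pure retrieval):

```c
#include <stdio.h>

/* Footnote 1 in code: 'addition' by table lookup for operands 0..9.
   The two operands are used as row and column indices; the entry at their
   intersection is the pre-computed sum. */
#define N 10

int main(void)
{
    int table[N][N];

    for (int a = 0; a < N; a++)            /* pre-compute every result once */
        for (int b = 0; b < N; b++)
            table[a][b] = a + b;

    printf("3 + 4 = %d\n", table[3][4]);   /* lookup, not calculation */
    return 0;
}
```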

2 Searle, John R. (1984) Minds, Brains and Science, the 1984 Reith Lectures. See p.32 – this later version of the Chinese Room omits the “script, story and questions” labelling of the batches of Chinese characters passed into the room, thereby so broadening the claims he makes that it becomes even more difficult to agree.

3 ‘The room’ seems to understand, or there seems to be understanding – the Chinese outside need not know or believe there is anyone in the room, but it would be the most natural assumption. A different line of enquiry follows if we grant them the possibility that there is no one inside: would they be willing to grant understanding in this case? Is it not the fact that there is a man inside which gets the argument off the ground in the first place?


Semiotics

Semiotics, the study of signs, has three parts:

syntax: ‘combined order’: the arrangement of words in a language (i.e. do verbs go before the noun, as in English, or after it, as in Hebrew)

semantics: the study of meaning

pragmatics: the study of the reasons for, and the effects of, utterances with meaning
