
Thinking About Thinking

A brief introduction to one of the fundamental questions for Artificial Intelligence research, by Tomas Campbell.

It is commonly said of philosophy that it is a timeless subject; the problems being discussed now are little different from those which concerned the Ancient Greeks more than two thousand years ago. While the essential uncertainty of man’s condition in the universe remains the same, it is worth remembering that philosophy, like all other subjects, is a product of society. Philosophical discussions do not take place in some sort of abstract vacuum, and different societies over the centuries and across the globe have all tackled fundamental questions from radically different perspectives. Practical philosophy is only as static as the society around us, and the issues which bother philosophers should bother us all.

A good example of how changes in society shape philosophy can be found in the recent scientific development known as Artificial Intelligence. Artificial Intelligence, or AI, is a branch of computer and cognitive science dedicated to the creation of autonomous, independent thought by digital machines. Although still in its infancy (the term AI was only coined in 1956), advances in this technology over the last thirty years have been startling. In a short space of time, AI has become a subject which demands the utmost attention from contemporary philosophers, and needs to be discussed in these pages, for it has enormous ethical, social and philosophical implications. For the moment we will limit ourselves to one of the questions central to AI: is it theoretically possible for a computer to think?

Any discussion of artificial thought must focus on our definition of the word ‘thinking’, something which is notoriously hard to agree upon. While all of us experience consciousness and are sure that we do think, actually describing what thinking is can be very difficult. Even with recent advances in science, thinking remains something of a mystery: ask any neuroscientist what consciousness is and you will be subjected to some distinctly unscientific language and rather vague mumblings. The traditional answer of AI researchers is to brush aside these awkward problems and to use a broad definition of thinking based entirely on what they assume to be the qualities exhibited by a thinking agent. In other words, anything can be described as thinking if it displays the ability to make plans, to learn, to weigh up information and to make decisions. If an agent were to give the appearance of doing all these things then, the argument runs, it would be useful and make sense to describe it as thinking, even if it were not actually a human being.
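For readers who find it helpful to see the idea in concrete form, this behavioural definition can be sketched in a few lines of the Python programming language. Everything below, from the thermostat example to the function names, is an illustration invented for this page rather than anything drawn from actual AI research; the point is only that the definition looks at what an agent does, never at what it is made of.

    # A minimal sketch of the descriptive (behavioural) definition of
    # thinking. The thermostat agent and all names are illustrative.

    class ThermostatAgent:
        """A trivial device exhibiting the outward marks of 'thinking'."""

        def __init__(self, target: float) -> None:
            self.target = target
            self.history: list[float] = []

        def learn(self, reading: float) -> None:
            # 'Learning': accumulate observations of the environment.
            self.history.append(reading)

        def weigh_up(self) -> float:
            # 'Weighing up information': summarise what has been observed.
            return sum(self.history) / max(len(self.history), 1)

        def decide(self) -> str:
            # 'Deciding': act on the weighed-up information.
            return "heat on" if self.weigh_up() < self.target else "heat off"

    def counts_as_thinking(agent) -> bool:
        # The descriptive definition inspects only exhibited capacities:
        # whatever has these behaviours qualifies, silicon or flesh alike.
        return all(hasattr(agent, m) for m in ("learn", "weigh_up", "decide"))

Notice that on this definition the humble thermostat scrapes in, which is precisely why the critics discussed later in this article insist that appearances alone are not enough.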

This argument for a descriptive definition of thinking does have its merits. After all, it is only by appearances that I assume my fellow human beings are all thinking creatures; I have no idea what is really going on in anyone else’s mind. It is on this basis that the goal of AI research has for a long time been to manufacture a machine capable of passing the famous Turing Test. This test was outlined by the brilliant mathematician Alan Turing long before AI became a serious academic science in its own right. In keeping with the descriptive definition of thinking, it is a test based on observable behaviour. A computer and a human sit in one room answering questions posed by an interviewer in another room, who communicates with them both only by typed messages. If the interviewer is unable to distinguish between the computer’s and the human’s answers, then the computer has passed the test and could usefully be described as a thinking entity. It is worth pointing out that, as yet, no programmer has devised a program which has passed the test, despite the incentive of a large cash prize.
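The test itself is simple enough to be written down as a procedure. The sketch below, again in Python, is one possible rendering; the placeholder respondents and the verdict rule are my own assumptions, and Turing’s original ‘imitation game’ exists in several variants.

    import random

    # A minimal sketch of the Turing Test as a procedure. The respondents
    # and the verdict rule are illustrative assumptions, not Turing's text.

    def human(question: str) -> str:
        return input(f"[hidden human] {question} > ")  # a real person types

    def machine(question: str) -> str:
        return "That is an interesting question."      # a placeholder program

    def turing_test(questions, judge) -> bool:
        # Hide which respondent is which behind the labels A and B.
        a, b = random.sample([human, machine], 2)
        transcript = [(q, a(q), b(q)) for q in questions]
        guess = judge(transcript)                      # judge answers "A" or "B"
        machine_label = "A" if a is machine else "B"
        # The machine passes if the judge fails to pick it out.
        return guess != machine_label

A judge guessing at random would let the machine through half the time, which is why any serious version of the test involves many judges and many rounds of questioning.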

With this definition of thinking established among proponents of AI, we need to understand on what basis they regard computers as the means of achieving their goal. A computer is a machine which manipulates binary symbols; it does this by electronically switching circuits between an On and an Off state, represented as 1 and 0 in binary code. These symbols are used to represent the external world; when I type my name TOM on a keyboard, the computer is switching logic gates on and off to store binary code corresponding to the letters T, O and M. From the humblest word processor to an aeroplane’s flight control computer, it is simply a matter of manipulating these binary symbols according to the specific functions required. It follows that a super-powerful computer, admittedly beyond the scope of anything in existence at the moment, would be able to build up an accurate internal representation of its environment in binary code. With sufficiently sophisticated programming it should then be able to interact with that environment in an apparently purposeful way. For those who see little similarity between this and thinking, AI proponents point out that the human brain is simply an incredibly complicated symbol manipulator itself, containing billions of interconnected neurons which pulsate between On and Off states as electrical discharges travel across the brain. Every time you waggle your finger, thousands of neurons are firing on and off in a part of your brain, manipulating these binary symbols of reality in order to perform a physical function.
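The example of the name TOM can be checked directly. In the standard ASCII character code (a detail the paragraph above leaves implicit), each letter corresponds to a number, and each number is a pattern of bits: circuits switched On (1) or Off (0). A few lines of Python make the correspondence visible:

    # Each character is stored as a number (its ASCII code), and each
    # number is a pattern of eight bits: gates switched On (1) or Off (0).
    for letter in "TOM":
        code = ord(letter)             # e.g. 'T' -> 84
        bits = format(code, "08b")     # e.g. 84 -> '01010100'
        print(letter, code, bits)

    # Prints:
    # T 84 01010100
    # O 79 01001111
    # M 77 01001101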

For those involved in AI research, thinking amounts to nothing more than the manipulation of symbols. Whether these symbols are made of silicon or protein is beside the point; what matters is that thinking is a physical process which can be, in theory at least, described and replicated. This hypothesis is an extremely provocative one which, if its implications are fully followed through, appears to claim that we ourselves are nothing but biological computers, manipulating symbols in much the same way that electronic computers do. It is a proposition which has been strongly criticised from many different perspectives. I will briefly mention two philosophical traditions which may have problems with the very idea of intelligent computers.

Firstly, a challenge to the concept of artificial thought can be made by those with transcendental beliefs. The philosophical tradition known as dualism claims that there is a separation between the mind and body, and that the mind is entirely non-physical in nature. It is an idea which has had powerful proponents over the years, most famously Descartes. For dualists the mind is elevated above material phenomena; it is a transcendental entity, and no amount of physical engineering could ever hope to create thinking.

In direct conflict with dualist beliefs, there is the philosophical tradition known as materialism. For materialists, what we call consciousness is merely the product of biochemical reactions in the brain. Or, in the words of a Virginia Woolf character, “The mind is meat” and hence inseparable from our bodily flesh. Today, living in a post-Darwinian society, many would sympathise with this view and regard the human mind as a product of evolution. We are all primates, simply with larger and more sophisticated brains than our monkey relatives; any notion of some supernatural mind is the height of human arrogance. It is this materialist perspective which all AI researchers must adopt if they are to have any hope of creating thought since, by its very definition, the transcendental mind cannot be physically created.

Yet it is quite possible to reconcile materialist beliefs with a strong rejection of AI. From a materialist viewpoint, it can be argued that thinking is a biological phenomenon and the term can only be used accurately if it is describing a living organism. The mind, as we experience it, can only function meaningfully in relation to its physical body, and a computer has as much chance of thinking as it does of breathing. In particular, the capacity to think evolved alongside our capacity to feel emotions. The two phenomena are deeply interconnected biological processes, and an agent can only be described as thinking if it does so within an emotional context. Human thought has developed alongside emotions such as love, jealousy, anger and sadness because they serve a definite purpose in assisting our survival and the successful reproduction of our genetic code. For humans, there can be no such thing as consciousness completely divorced from emotional feeling; it is simply not the way that our brains function.

For both the transcendental and materialist conceptions of what it is to think, it is clear that the descriptive definition used by AI proponents is inadequate on a philosophical level because it does not fully express what we ourselves experience in thinking. While it is still a possibility that computers might be able to learn, reason and generally behave as intelligent beings, it does not follow that we should ever describe their behaviour as thinking. The important distinction needs to be made that, whereas it may be useful to describe a computer as thinking, it is not necessarily accurate. After all, we all understand and regularly use sentences such as “My car decided to break down at exactly the wrong moment”, but no one who reads this really believes that my car made a conscious decision to stop working; it is simply a means of conveying what happened in a slightly colourful and metaphorical manner. In the same way, although we might describe a computer as having a mind of its own or being extremely clever, this does not mean that such a description is accurate; we should not make the mistake of believing our own metaphors. Even if we reject the dualist position, the point remains that a computer will never be able to think because it is not an evolved organism with its own cognitive and emotional interdependency. However much we talk about cars, computers and calculators as thinking entities, to extend the term to anything non-biological may be a philosophical error.

© Tomas Campbell 1997


AI News

• A team of British AI researchers from Sheffield University has won this year’s Loebner Prize for the most human computer.

Hugh Loebner, an American academic, put up a prize of $100,000 for the first computer to pass the Turing Test. In the meantime, a smaller prize is awarded annually at a contest held in New York to determine the ‘most human computer’.

The Sheffield team’s program, called Converse, gossiped with the contest judge about Bill Clinton’s attitude towards lesbians.

• The world chess champion Garry Kasparov was beaten by a new IBM supercomputer, “Deep Blue”, in a match held at New York’s Equitable Center. This is the first time that a computer has beaten a reigning world chess champion in a match. The games were relayed live on the Internet.

There was controversy about the second game of the match. Kasparov resigned, convinced that checkmate was imminent. However, later analysis showed that he could have forced perpetual check and therefore drawn the game.
