
Designing Androids

Antoni Diller says that robots must be taught how to learn.

A great deal of effort in Japan, the United States and various European countries is being put into several competing projects to design and build humanoid robots. The ultimate goal is to produce machines that can interact with human beings in a meaningful way, as do the androids depicted in science fiction. For example, Lieutenant Commander Data, although lacking certain social skills and unable to experience emotions, is a full member of the crew of the Starship Enterprise in Star Trek: The Next Generation. Not only does he carry out his duties as well as, and sometimes better than, his human colleagues, but he also interacts with them socially. He plays poker with his fellow officers, gives violin recitals to the crew and in one episode is even seduced by a human crew member.

In the real world, we are still a long way from being able to build a robot with Data’s functionality, but considerable progress has already been made. Peter Menzel and Faith D’Aluisio, in their lavishly illustrated book Robo Sapiens: Evolution of a New Species (MIT Press, 2000), provide a whistle-stop tour of the major centres engaged in android research around the world and describe many of the robots that have already been constructed. (Additional technical information can be found at the www.ArtificialBrains.com website.)

One of the best-known humanoid robots, because it has been featured in many television documentaries, is the Honda P3 (short for ‘Prototype 3’). This was developed by a team led by senior engineer Masato Hirose, at the Japanese car manufacturer’s Wako Research and Development Laboratory. The P3 robot looks like a man wearing a giant spacesuit, but is entirely mechanical and electronic. It can walk, climb stairs and open doors.

The DB or Dynamic Brain robot, developed at Japan’s Advanced Telecommunications Research Institute under the leadership of Mitsuo Kawato, currently has no legs, but it can juggle and dance, from the waist up, the Okinawan folk dance Kachashi. It is interesting for many reasons, not least of which is the fact that it learnt to dance by imitating human dancers whom it saw on video.

Many of the people developing androids think that it is only a matter of time before fully-functioning humanoid robots will be manufactured. Professor Junji Furusho of Osaka University, whose robot ‘Strut’ is perhaps less impressive than those already mentioned because all it can do at present is stand upright, thinks that in about a hundred years androids with human-like abilities will be a reality.

I have given details of Japanese androids because most of the research to develop human-like robots is currently being done in Japan. There are two main reasons for this. The first is that the Americans dominate the personal computer market and the Japanese don’t want them also to dominate the android market, which they think will be at least as important. The second reason is that Japan has a large and growing proportion of elderly people, and many roboticists there envision their androids working as carers for this group of people, thus freeing up more humans to do economically productive jobs.

Although a great deal of effort is being put into solving the problems involved in giving androids various human-like abilities, none of it is currently being put into the study of one crucial human ability. Researchers are making great strides in getting androids to walk on two legs, to climb stairs, to grasp objects without breaking or dropping them, to recognise faces and a variety of physical objects and to imitate what they see human beings doing. It is hard to make robots that can do these things, and I have no desire to belittle what has already been achieved. In my view, however, even if a robot succeeds in doing all these things as well as a human it will still lack at least one essential human ability, namely that of learning by accepting what other people say and believing what they have written.

Why Robots Need to Learn How to Learn

In order to be a fully-functioning member of a complex, technological society, a human being needs a vast amount of information, and an android wanting to live in such a society would be in the same position. (I also think that it is necessary to have a lot of knowledge in order to live in any human society, but it simplifies my argument to only consider the sort of society that I and most of my readers inhabit.) Some of the knowledge needed is what sociologists call ‘stock’ or common sense knowledge. This relates to those activities that most people engage in at some time or other; such things as managing money, shopping, looking after a home, travelling and going on holiday. Take shopping, for example. This involves knowing about money, ownership and its transfer, what various items on sale are for, how to make a purchase and how to transport the items bought.

Most people also possess various kinds of specialised knowledge relating to matters such as how they earn a living, their hobbies and other leisure pursuits. Anyone seriously interested in gardening, for example, needs to know a lot about plants, their diseases and upkeep, types of soil, planting times, favourable weather conditions and so on.

It is impossible for any one individual to acquire all this information by himself through perception or observation. The sheer amount of information needed ensures that much of it has to be obtained by accepting what other people say and believing what they have written. It isn’t surprising, therefore, that people in a complex, technological society like ours go through an extended period of education and enculturation which enables them to become fully-functioning members of that society. (During this process people acquire much more than just propositional knowledge, but I have decided to focus on this because of its importance.)

As I’ve already mentioned, an android designed to live in human society would need as much information as a human member of that society. Some of this could be pre-programmed, but, like a human being, the android would also have to be able to acquire further information on a daily basis even to engage in such everyday activities as listening to a weather forecast. If we are ever to produce androids that can interact meaningfully with human beings, then they will have to be equipped with a means of learning by believing some, at least, of the assertions that they encounter. Before we can give an android this ability, however, we first have to understand it in its human form.

So why is it that researchers trying to fabricate androids, and also those working in Artificial Intelligence, have overlooked the need to incorporate in their machines the ability to learn from other people’s assertions? I think that it is because they are working in a scientific and philosophical tradition that downplays this ability while at the same time exaggerating the importance of perception in acquiring information. This is the empiricist tradition, which can be seen as having its origins in the work of Francis Bacon and which was refined by many philosophers including David Hume, John Stuart Mill, Bertrand Russell and the logical positivists of the Vienna Circle. Its most recent manifestation is analytical philosophy. The core of empiricism is precisely the belief that all our knowledge is derived ultimately from the evidence of our senses.

A good example of how the assumptions of this tradition affect robotics can be seen in the work of the philosopher John Pollock, who has theorised about android construction. His book Cognitive Carpentry (MIT Press, 1995) is subtitled A Blueprint for How to Build a Person and in it, on page 52, he states, “The starting point for belief-formation is perception. Perception is a causal process that produces beliefs about an agent’s surroundings.” He analyses perception in great detail and at great length and considers how an android could acquire beliefs by means of it, but he fails to analyse our ability to learn from others’ assertions. In this he is typical of most epistemologists, who, on the whole, say very little about testimony and tradition.

Although the empiricist tradition has its origins in Western philosophy and science, it also affects Japanese research in robotics. Japan has a long and distinguished philosophical tradition of its own, but this has had little influence on Japanese science and technology, which are largely developed in the context of Western modes of thought.

Even a diehard empiricist, who believes that all empirical knowledge is ultimately derived from sense experience, must admit that all such knowledge cannot be derived from his or her own sense experiences alone. Most of it must come from other people’s sense experiences, but such knowledge, to our original empiricist, is testimony transmitted through at least one intermediary. Whether or not he accepts it depends on many factors including the trustworthiness of the intermediary or intermediaries concerned. Thus, even the empiricist must concern himself with such issues as the character of the person conveying the knowledge, the manner in which he conveys it and whether or not he has some vested interest in what he is saying — all of which may influence the empiricist’s decision to accept the information being transmitted.

Of course, there is a perceptual element involved in acquiring information by accepting what another person says or has written, but this simply relates to the medium that conveys the message. The same content can be conveyed in many different ways. It can be spoken, written, signed (using sign language) and so on. The perceptual garb of the message is irrelevant to its content.

How Can We Tell What to Believe?

I hope I’ve convinced you that most of our knowledge comes from what we read and hear, but clearly we don’t believe everything we read or hear. So how do we choose what to believe and what to disregard? Although the epistemological significance of how we learn from other people’s assertions has been neglected by philosophers, it has not been entirely ignored. A philosopher who has influenced my own views on the subject is H.H. Price. Chapter 5 of his book Belief (1969) concerns how we learn things from other people. Most of Price’s discussion centres on the awkwardly-phrased principle, “What there is said to be (or have been) there is (or was) more often than not.” At one point, however, he says that there is another way of interpreting this principle, namely as a maxim or methodological rule, which he expresses as, “Believe what you are told by others unless or until you have reasons for doubting it.”

Although Price’s rule goes some way towards capturing the way in which we evaluate the assertions we encounter, it does not take into account the epistemological complexity of what goes on. It lumps together, and treats in the same way, every assertion that we encounter. It doesn’t, for example, distinguish between different sources of information and, when we are listening to someone, it doesn’t take into account the social context in which the communication takes place. It is supposed to apply both to the answer we receive when we ask a stranger the time and to the assertions made in an article published in a learned journal. It encompasses the gossip we hear from a friendly neighbour and the information presented at a lecture at a prestigious conference. In fact we treat the information we receive in different ways depending on its content, its source and the context in which it was received; Price’s rule doesn’t take this into account.

Furthermore, although Price is aware that people do change some of their beliefs, he tries to capture this only by qualifying his rule with the phrase “unless or until you have reasons for doubting it.” As I shall explain, I think it is more accurate to see belief-formation as a two-stage affair, with belief-revision taking place in the second stage.

I propose that our acceptance or rejection of the information that we receive from other people is governed by the defeasible rule to believe what we read and to accept what other people tell us. I shall call this the acquisition rule. The idea of defeasibility is understood in various ways, so I need to explain how I use the concept. I will do this by means of a legal example. A defeasible law is one that usually governs our behaviour, but can be overridden in special circumstances. For example, most countries have a law against murder. However, in a country that permits euthanasia under certain circumstances a person is not guilty of murder if those conditions obtain. The law prohibiting murder does not apply in those cases, because it has been overridden by the law allowing euthanasia. The law against murder is, thus, defeasible.

So what is the point of my acquisition rule, if it is defeasible? It may not appear very interesting or helpful, but it turns out to be exceptionally fruitful precisely because it forces us to investigate the cases in which it is overridden, and that investigation reveals that the way in which we acquire information from other people is exceedingly complicated.

I am not suggesting that we never end up with false beliefs as a result of employing the acquisition rule. We need so much information that we don’t have time to check all of it thoroughly. The acquisition rule enables us to deal efficiently with the vast amount of information we encounter daily, but its down-side is that it sometimes lets through false information. In fact, I hold that belief-formation proceeds in two stages. In the first we acquire beliefs through perception and from others and in the second we thoroughly check out some pieces of information that are particularly important to us. A scientist working on BSE (‘Mad Cow Disease’), for example, would not simply accept on trust, as the rest of us might, that BSE is caused by a prion rather than a virus. Because of its importance to him, he would evaluate the evidence very carefully before coming to a decision.
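As a rough sketch, the two-stage picture could be rendered in code. Everything here is my own illustrative invention (the names `Report`, `acquisition_rule`, `second_stage` and the sample overrider are not from any existing system); it is meant only to make the structure of the proposal vivid, not to be a definitive implementation:

```python
from dataclasses import dataclass

@dataclass
class Report:
    claim: str
    source: str
    important: bool = False   # worth the cost of thorough checking?

def acquisition_rule(report, overriders):
    """Stage one: accept testimony by default. The rule is defeasible,
    so any applicable overrider blocks acceptance."""
    return not any(override(report) for override in overriders)

def second_stage(beliefs, scrutinise):
    """Stage two: re-examine only the beliefs that matter to us,
    keeping the rest on trust."""
    return [b for b in beliefs if not b.important or scrutinise(b)]

# An overrider in the spirit of "do not believe everything
# a person trying to sell you something says":
sales_pitch = lambda r: r.source == "market trader"
```

On this sketch, a weather forecast is believed by default, a market trader’s claim is not, and only beliefs flagged as important ever reach the costly second stage.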

The acquisition rule can be seen as a more modern and more accurate version of Price’s methodological rule. It takes us some way in understanding how we acquire information from testimony and tradition, but in order to complete the picture we would need to have a better grasp of how defeasibility operates in this area and an account of those situations in which the acquisition rule is overridden. I think that it is premature to try to formalise the defeasibility involved because we still do not have a good informal account of what is going on. The creation of such an account is one of the things that I am trying to do.

There are, no doubt, several ways of systematising the cases when the acquisition rule is overridden. I propose that we first distinguish between various sources of information. Thus, I suggest that we study separately the situations in which we receive information from another person, from reading a book, from reading an article in an academic journal, from a newspaper, from listening to the radio, from watching television, from the Internet and so on. I am not suggesting that these sources are necessarily mutually exclusive. The same article, for example, could appear both in a newspaper and on the Internet. It is useful, however, to consider together, for example, all cases of how we evaluate information found in books.

The next thing that needs to be done is to analyse each of these sources of information in such a way as to discover different aspects of them that may cause us to override the acquisition rule. I do not have space to show how this can be done for every source of information, so I will just look at the case when we receive information by listening to someone speaking. I do not want to suggest that no philosopher has said anything useful about this matter. For example, in section X, ‘Of Miracles’, of An Enquiry Concerning Human Understanding (1748) David Hume mentions various considerations that we take into account when assessing the truth of other people’s testimony. He says that we consider the character of the person involved. If he is of doubtful character, then we do not necessarily accept his testimony. We consider whether or not the person has an interest in what he tells us. We also take into account the manner of the person’s delivery. If he either hesitates or presents his testimony with “too violent asseverations”, then this may arouse our suspicions. This is helpful, but no one has yet investigated the subject systematically and exhaustively. (I have made a start in several published papers that are also available on my website at www.cs.bham.ac.uk/~ard.)

When listening to another person we can group the factors that may cause us to override the acquisition rule into the following five categories: factors relating to the social context in which the assertion is made; those relating to the speaker; his manner of delivery; the content of the assertion and those relating to the recipient of the message. To illustrate the complexity of what we are dealing with, I will give some examples. There may be features of the social context in which an assertion is made that make us wary of accepting it outright. For example, assertions can be uttered in the course of a theatrical performance of a stage play; during the pitch made by a market trader or some other kind of salesman; by a member of a religious sect as he tries to convince us of the truth of his faith; during the role play that is part of some counselling training; by the defendant in a court of law trying to convince the jury of his innocence; during a government press briefing and so on. In each of these cases we may decide to override the acquisition rule. In the case of the market trader the overriding rule may be along the lines of, “Do not believe everything a person trying to sell you something says.”

We may know things about the assertor that make us wary of accepting his assertions. For instance, he may have a reputation as a liar. Or, he may be speaking about a topic of which he knows nothing. He may have an agenda in speaking to us — being, say, an election candidate who wants our votes.

There may be things about the manner of an assertor’s delivery that make us wary of accepting what he says. For example, people often make more eye contact than normal when lying.

The content of a message may make us wary of accepting it. An extreme case would be an inconsistent assertion, but there are also things that people often lie about. Some of these, such as “It’s in the post” and “I will leave my wife”, have passed into folklore.

People vary in their readiness to believe what they hear. Some are more credulous than others and some are more knowledgeable than others about particular subjects.
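The five categories of override factor could be organised as a table of checks, each category holding the conditions under which the acquisition rule is defeated. Again, every field name and predicate below is an invented example of mine, drawn loosely from the cases just discussed; a serious implementation would need the systematic account the article calls for:

```python
# Each predicate inspects a simple dict describing an assertion and
# its circumstances. All field names and values are illustrative.
OVERRIDERS = {
    "social context": [
        lambda a: a.get("context") in {"stage play", "sales pitch", "press briefing"},
    ],
    "speaker": [
        lambda a: a.get("reputation") == "liar",
        lambda a: a.get("vested_interest", False),
    ],
    "manner of delivery": [
        lambda a: a.get("eye_contact") == "unusually high",
        lambda a: a.get("asseveration") == "too violent",
    ],
    "content": [
        lambda a: a.get("claim") in {"It's in the post", "I will leave my wife"},
    ],
    "recipient": [
        # A knowledgeable recipient may balk where a credulous one would not.
        lambda a: a.get("recipient_expertise", 0) > a.get("claim_plausibility", 1),
    ],
}

def categories_triggered(assertion):
    """Return the categories in which at least one override check fires."""
    return [name for name, checks in OVERRIDERS.items()
            if any(check(assertion) for check in checks)]

def accept(assertion):
    """Apply the acquisition rule: believe unless some category overrides it."""
    return not categories_triggered(assertion)
```

So an everyday remark is accepted by default, while a claim made during a sales pitch, or one that folklore marks as a common lie, trips an overrider and is withheld from belief.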

This brief account of the epistemological issues involved in assessing spoken assertions should make the reader aware of the complexity of the task involved in giving an android the ability to learn from others.

Conclusion

A huge amount of time, money and effort is being put into several projects whose goal is to build an android that can interact meaningfully with human beings. Many difficult problems have to be solved before this goal can be achieved, but there is at least one problem that no one is working on and that is the problem of giving an android the ability to acquire knowledge from other people’s assertions. Before this ability can be programmed into an android it first has to be understood in its human form. I have put forward a programmatic solution, namely that our acceptance of others’ assertions is governed by the defeasible rule to believe them. In order for this proposal to be developed sufficiently for it to be incorporated into an android, a lot of research needs to be done. We will need a theory that encompasses all the various situations in which this rule is actually overridden, and when that has been achieved, we will need to formalise the kind of defeasibility that is involved.

© Dr Antoni Diller 2003

Antoni Diller is a lecturer and researcher on artificial intelligence at Birmingham University.