
Editorial

Talking to Ourselves?

by Rick Lewis

What common thread links robots, ape language experiments and Kasparov getting beaten at chess by Deep Blue? Maybe it is that each can tell us something about ourselves. This issue kicks off with an article describing attempts to teach sign-language to chimpanzees. The author concludes that the results of these experiments should make us change our ideas about the nature of language. Edward Ingram uses an amusing fantasy about a world inhabited by robots to make interesting points about the nature of perception in humans. Roger Caldwell discusses Dan Dennett’s view that we are conscious robots: again, a consideration of robots casting more light on the nature of humans, this time on the nature of consciousness. This issue may appear to be full of articles about animals and robots, but really it is full of articles about humans.

According to Socrates (in Plato’s Protagoras), one of the key commandments for people aspiring to be philosophers should be “Know thyself!” Several areas of philosophical enquiry fall under the general heading of ‘knowing ourselves’, but none more so than the topics represented by the three articles above: the nature of language, the problem of perception, and the mind/body question.

Regardless of whether or not we can talk to chimpanzees, the sad fact remains that we’ve yet to encounter a non-human species with which we can have a really decent conversation, about subjects more abstract than bananas. Consequently, no viewpoint is available to us other than the human; we can only talk among ourselves. Being a gregarious species, we scan the skies looking for extraterrestrial civilisations, and maybe one day we’ll find one. To date, however, the Search for Extraterrestrial Intelligence (SETI) set up by the late great Carl Sagan, with its arrays of radio telescopes, has had no more definite successes than the parka-clad UFO enthusiasts with their binoculars.

Journalists reporting developments in the search for alien life routinely say how its discovery would have profound philosophical implications for humanity. The usual implication is that this is because it would force us to recognise that we weren’t unique in our possession of consciousness. It would be another step in the long sequence of steps leading us away from the old picture of human uniqueness, of us being at the centre of the universe, created by God in His own image, on an Earth about which the rest of the cosmos revolves. However, there may be another, more constructive, way in which finding alien life (intelligent life, not just bugs and microbes) could affect philosophy.

The way we perceive the universe depends (according to many) on the nature of our perceptual apparatus (our eyes, ears etc.) and on the conceptual categories we acquire from the society in which we are reared (see Issue 16, ‘Ontology for Beginners’). Many philosophers believe that we can never know the world as it is in itself, only the interpretation our minds put on the raw sense data. But little green men from space would have different perceptual apparatus and a different set of socially-generated categories. By comparing their view with our own, we might be able to understand the world better.

Meanwhile, scientists are trying to create non-human conversational partners closer to home: intelligent computers. Maybe Carl Sagan was pointing his radio telescope the wrong way, and would have had more luck moseying around the labs of IBM. Artificial intelligence (AI) research has been much in the news recently, partly due to the victory of IBM’s Deep Blue over the human world chess champion, Garry Kasparov. It is interesting to note that the only widely-accepted criterion of artificial intelligence, the famous Turing Test, is based on the ability of a computer to converse with a human being in such a way that the human doesn’t realise that the computer is indeed a computer. But would the creation of AI on these lines really allow us to compare notes with intelligent entities looking at the universe from a distinctively non-human viewpoint? Or wouldn’t AI machines inevitably embody our own methods of perception, our values and ways of thinking, as they would be programmed by humans? Isn’t the point of the Turing Test to encourage the creation of imitation human beings rather than ‘intelligence’? Danny Kodicek made this point well in his short story ‘Humanity’ back in Issue 12. If this is correct, then we may be better off talking to the chimps after all.

Many thanks to all those readers who sent in contributions for our proposed special issue on the meaning of life. We’ve had to postpone that project for a little while, but we really will produce it in two or three issues’ time, so we’ll hang on to all your contributions until then. Someone here suggested that it is in the nature of things that an understanding of life’s meaning always seems just ahead of us, but never quite in our grasp, and that the best metaphor for this would be simply to print a little box in every issue of Philosophy Now saying “NEXT ISSUE: We Reveal the Meaning of Life!”
