Mathematics, Morality & Machines
William Byers, mathematician, and Michael Schleifer, moral theorist, use their judgement to calculate the improbability of a machine thinking like a human being.
An ongoing debate, much discussed in the pages of this magazine, involves the extent to which the human mind can be modeled by digital computers. Is every human activity potentially programmable? Our response is “Clearly not.” We propose to support our case with considerations from an unusual combination of fields, namely, mathematics and ethics. Contrary to the views of some recent contributors to Philosophy Now (in Issue 72), we do not accept the possibility of ‘moral machines’, nor even of ‘mathematical machines’, if by the latter is meant machines which can replicate what a mathematician understands as ‘doing mathematics’. And morality and math (our fields of study) are not the only areas where an unbridgeable gulf exists between computers and human beings. Our general perspective is that the human mind – even the mathematical mind – is not reducible to the transistors of a computer or robot, nor even to the logical codes implemented by and within the circuits of a computing device.
We’re ready to wager against people like Daniel Dennett, Richard Dawkins and Wendell Wallach (who advocates ‘moral machines’) that we will never see a robot of the sort they believe possible – the sort depicted by Robin Williams in Bicentennial Man. Our reasons for this are not merely the practical problems (i.e. lack of time and resources), as Dennett contends they would be: they are also theoretical.
There are two notorious failures of artificial intelligence. The first concerns language. Attempts at computer simulation don’t come close to the language ability of a two-year-old child. We are convinced that programmers will never succeed in getting computers to simulate our linguistic competence. Summarizing our reason very briefly: as Wittgenstein taught us, the actions of very young children are crucial to language mastery; and no computer or robot will have anything like the experiences that constitute a child’s early development.
The second important failure in AI is related to the failure to get computers to play the game of bridge at a high level – despite over fifty years of trying. A condensed version of our argument is as follows: AI has succeeded remarkably at chess; since 1997, when Deep Blue defeated Garry Kasparov, computers have beaten the highest-ranked human beings. But unlike chess, the game of bridge requires a large dose not just of logic, but of judgment. We maintain that computers will never play high-level bridge as experienced humans can precisely because of this irreducible element of judgment. Thus the example of bridge illustrates the limits of programmable rules, and the importance of human judgment.
The more competent the bridge player, the more often the decision to bid or not, and what to bid, is based on factors such as ‘table feel’, which encompasses noticing the almost imperceptible pauses and hesitations which human beings display when bidding or not bidding. These nuanced factors are not extraneous to the game but are part of bridge competence. The top players are those who can bring into play these abilities, which go beyond reasoning, inferences, deductions, memory of a system, or the rules. In bidding, as in the other more difficult aspects of bridge (defense and how to play the hand), that extra something which goes beyond reasoning is best captured by the concept of ‘judgment’. Judgment is crucial for bridge, but computers are not capable of judgment, because judgment is not an algorithmic, rule-based procedure. A computer can play bridge, but not the way a human can. Similarly, a computer can do morality and math, but not the human varieties.
Another argument that computing machines are not capable of being moral or mathematical arises from thinking about the role of conflict in human life. It is our position that conflict is primordial and therefore unavoidable: conflict is a fundamental and inescapable aspect of both the physical world and the psychological domain. This is the substance of the famous Buddhist dictum that ‘life is suffering’. It is not that there aren’t situations where conflict is minimal; but such situations are transient. (On the other hand, one might take the position that conflict can indeed be eliminated, and that that elimination is the ultimate goal of much social and scientific activity.) It is our contention that the position that one assumes with respect to the role of conflict will determine one’s position towards the potentials of computing machines with algorithmic processing.
Intellectually, conflict is manifested as logical inconsistency. We expect sound theories, especially sound mathematical theories, to be free from things like ambiguity, logical contradictions, and particularly paradox. The computer is a device which operates within such domains of logical consistency, through algorithms. Paradoxes are precisely where algorithmic thinking breaks down. An algorithm is characterized by having a well-defined temporal order – a series of rules applied in a systematic manner, if you will. The existence of such algorithms presupposes that specific areas of intellectual activity can be made conflict free. Thus, saying that a particular domain can be computerized (morality, for example) implies that the domain can be made intellectually conflict free, in terms of being consistent and free of such things as ambiguity and paradox.
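To make the notion of an algorithm concrete, here is a canonical example (our illustration, not part of the original argument): Euclid’s procedure for finding the greatest common divisor. Every step is fixed in advance; at no point does its execution call for judgment.

```python
def gcd(a, b):
    """Euclid's algorithm: a series of rules applied in a well-defined
    temporal order. At no step is there any ambiguity about what to do next."""
    while b != 0:
        a, b = b, a % b  # replace the pair by the divisor and the remainder
    return a

print(gcd(48, 18))  # 6
```

This is exactly the kind of conflict-free, fully specified procedure at which a computer excels.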
Such attempts for freedom from intellectual conflict are analogous to the financial debacle the entire world recently experienced, which involved an attempt to use mathematical techniques to eliminate risk, and so make a lot of money. Yet in the end risk could not be eliminated, because risk is an intrinsic element in all endeavors, just as ambiguity and conflict are intrinsic to human contemplation. Thus the attempt to build an algorithm to eliminate risk is the same sort of futile activity as the attempt to build a moral machine, or the attempt to write a program that will do mathematics or science in a human, creative manner. Such attempts inevitably end badly.
Morality is concerned with what an individual ought to do in situations that typically involve inner conflict. As adults and as children, we are aware of situations where, for example, we must decide to keep a secret for a friend or tell the truth to someone else. To make our choice we have to compare like and unlike situations and relationships, and we cannot avoid the ambiguities and contradictions involved. Rather we must thrive on these, take inspiration from them, be challenged by them, and make use of them to come up with a solution. This is what judgment is all about. Judging essentially involves looking at differences and similarities where these are not obvious, and making a decision where things are uncertain.
The philosopher/psychologist William James wrote: “There is no such thing possible as an ethical philosophy dogmatically made up in advance … The highest ethical life… consists at all times in the breaking of rules which have grown too natural for the actual case.” (The Will to Believe and Other Essays, p.209) James has here captured the essence of our argument against the possibility of moral machines. Human beings make moral choices in a way which often transcends rules. We use judgment, which is notoriously context-dependent and (as in the game of bridge) is often needed to interpret rules, choose between conflicting rules, or apply rules in varying situations which may be both somewhat similar and somewhat different.
Two thousand four hundred years ago Aristotle noted: “Anyone can become angry – that is easy. But to be angry with the right person, to the right degree, at the right time, for the right purpose, and in the right way – that is not easy.” (Nicomachean Ethics) Yet for Aristotle’s word ‘anger’, we could substitute ‘honesty’, saying, “But to be honest with the right person, to the right degree, at the right time, for the right purpose, and in the right way – that is not easy.” We could also substitute the words proud (or humble), jealous (envious), trusting, grateful, regretful, and so on. The point is, we must try to be neither too much nor too little, and aim in each case for the morally appropriate middle position. Furthermore, when processing these emotions, people have the ability to reflect about the future and the past – a crucial ingredient of our moral action and emotional life. For example, we can reflect upon how much or how little regret or gratitude was shown by associates in the past, and take steps to modify our feelings and behaviour in the future. We can even say “Thank you” or “Sorry” years later, if we missed saying it on an earlier important occasion.
The Heart of Ethics
In moral education, the key is helping children develop good judgment. ‘Judgment’ is the essential characteristic of what educators call critical and creative thinking. So training children to make better judgments is different from training them in logic or in general forms of reasoning.
One of the objectives of a moral education program is to help children perceive situations accurately and to make sound moral judgments on that basis. For example, if a teacher were to ask the children in his class whether cruelty is wrong, they would probably all agree that it is. So if they were then presented with a situation in which a student was punished, and the children claimed that the student was being treated cruelly, the question would be whether what happened to the student was in fact an example of cruelty. We can modernize Aristotle at this point with a practical syllogism:
1) All cruelty is wrong.
2) This is a case of cruelty.
3) Therefore this case is wrong.
We may all accept the first premise, that all cruelty is wrong, just as we accept that all love is good, all justice is good, and so on. But the real heart of ethics lies in trying to determine the truth or falsity of the second premise. If this is in fact an instance of cruelty, then it is wrong. But is it, in fact? Here only sensitivity in our judgment – only the ability to perceive and read the situation in all its complexity – can help us decide.
Everyone is acquainted with the expression ‘rules are made to be broken’. Some rules, however, must not be broken, since they’re needed for protection and safety (a good example is the rule ‘Red lights mean “stop” and green ones mean “go”!’). To many people this is not a problem; they see rules as permanent, universal, and not to be broken. Children, however, understand that this is not always so. When discussing rules in the classroom, children often wonder whether the given rules are the best rules for dealing with an issue. The rules of a game like baseball can also have a different status from the rules of a game the children themselves have devised, such as making up sentences: rules can be temporary, modified, or long-lasting. They can be descriptions of regularities in the way things work, such as ‘the sun rises every morning’ or ‘objects obey the laws of gravity’; they can be conventional regulations, such as ‘go on green, stop on red’; they can be the rules of a game, either descriptive, as in ‘the bishop moves diagonally in chess’, or prescriptive, as in ‘move your king pawn before your knight’; and there are moral rules, such as ‘Keep your promises!’ All forms of rules make life simpler. However, we need to apply and interpret them case by case: ethical systems and codes of ethics may give us rules and principles for morality, but we need to interpret and apply these rules in varying contexts. This is a capacity of judgment beyond even the theoretical reach of moral machines. So we may have ‘moral machines’ in a sense, but they will not be moral as humans are. Let’s take the argument further, though: machines are not even mathematical in the sense people are.
To make this case, we shall take up two fundamental ideas from mathematics which pervade all the sciences – namely, infinity and randomness. Both of these notions are inherently paradoxical and ambiguous; and let us remember that algorithmic machines hate paradox and ambiguity.
Mathematics has been called ‘the science of the infinite’, and it is certainly true that in a profound way mathematics is characterized by the manner in which it deals with the infinite. But infinity is a strange beast. This is best brought out by considering that the idea of the infinite is seemingly inseparable from a multitude of paradoxes, such as Zeno’s ‘Achilles and the Tortoise’. A multitude of books have been written about the paradoxes of infinity, but the question that interests us here is: What is there about the concept of infinity that seemingly invites paradox? What is it about infinity that makes it so dangerous that the Greeks had to carefully distinguish between ‘potential infinity’ (legitimate) and ‘actual infinity’ (forbidden)? It is interesting in that regard that in-finity is defined not by what it is, but by what it is not – namely, finite. Thus infinity is ‘larger than any number you can think of’ or ‘a distance or size that is without bound’ (such as one possible model for the universe). There is an obvious paradox built into such conceptions of infinity: any de-finition is a finite thing, and so we could say that by definition infinity cannot be defined, for that would be to make it finite… For this reason infinity has often been taken as an attribute of divinity, for the divine can also be thought of as that which cannot be defined or otherwise contained.
This discussion is not mere playing with words. The prohibition against the use of ‘actual infinity’ was extremely important in the history of mathematics. For example, a real number (such as seven, or the square root of two, or pi) may be represented as an infinite decimal. Much more often than not, a real number is only expressible as an infinite sequence of digits; and this way of expressing a number generalises to the whole set of real numbers. In this way, every real number is an infinite collection. So if we exclude infinite collections we must exclude the real numbers, and therefore calculus, differential equations, and with them much of science and engineering.
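The point can be made tangible on any computer (a minimal sketch of our own, not from the original argument): a machine can hold only a finite prefix of an infinite decimal such as the square root of two, and the finite stand-in never squares back to exactly two.

```python
from decimal import Decimal, getcontext

# A computer stores only a finite prefix of an infinite decimal expansion.
getcontext().prec = 50
root = Decimal(2).sqrt()      # just the first 50 digits of an endless decimal

# Square at higher precision, so rounding cannot hide the gap:
getcontext().prec = 60
print(root * root)                # very close to 2, but not exactly 2
print(root * root == Decimal(2))  # False
```

However many digits we keep, the true real number always outruns the finite representation.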
Now one might claim that the problem of infinity does not exist anymore – that the Greeks were just not as clever as us, and that progress in mathematics, such as the work of Cantor, has successfully resolved the problem of infinity. This would be a simplistic response for two reasons. The first is that Cantor’s treatment of infinity (in terms of set theory) produced a huge number of anomalies. Thus the paradoxical elements of the idea of infinity emerged as aspects of Cantor’s mathematics of infinity. Dealing with these problems has changed mathematics in a fundamental way. In other words, not only can the problematic aspects of infinity not be eliminated, but, on the contrary, they are what are most valuable. They’re the sort of element that leads to the development of new mathematics.
The second reason for not accepting that infinity has been safely rescued from the domain of paradox, is that Cantor’s approach to infinity is only one of many possible definitions. The intrinsically paradoxical creature which we might call ‘informal infinity’ can be defined in many ways, and is. Thus projective geometry has the idea of the ‘point (or points) at infinity’, which arose from the perfection of perspective in painting, but which has found its way into a great deal of mathematics – complex analysis for example. This is an entirely different idea of infinity from Cantor’s, with its own strengths and weaknesses. And there is no reason why there may not be other equally valid attempts to conceptualize infinity in the future.
Space does not permit us to discuss randomness in the same detail as we discussed infinity. However it is important to our argument to show that randomness also has a paradox built into its very definition. There have been eminent scientists who claimed that randomness is impossible to define. This has not stopped randomness from becoming a key concept in a series of modern scientific theories such as statistics and the theory of probability, quantum mechanics, the theory of evolution, and chaos theory.
What’s the problem with randomness? We’ve tried to capture its essential nature in this parody of an Aristotelian syllogism: ‘Mathematics is defined as the study of pattern. Randomness is the absence of pattern. Therefore mathematics studies (only) non-randomness.’ Randomness, being the absence of regularity or pattern, cannot, by definition, be captured by systematic thought, since what can be systematized (i.e., what can be incorporated into a logical theory or computer algorithm) cannot be random. Thus the random numbers produced by your calculator or computer are not random at all: they are produced by algorithms, so-called ‘pseudo-random number generators’, which merely mimic certain aspects of randomness through the use of a deterministic (non-random) procedure.
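A minimal sketch (our illustration; the constants are the widely used ‘Numerical Recipes’ parameters) shows how thoroughly deterministic such generators are: a linear congruential generator run twice from the same seed yields digit-for-digit the same ‘random’ sequence.

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """A classic linear congruential pseudo-random number generator.
    Entirely deterministic: the same seed always yields the same sequence."""
    out = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)  # scale into [0, 1)
    return out

# Two runs from the same seed are identical -- the 'randomness' is an illusion.
print(lcg(42, 3) == lcg(42, 3))  # True
```

The output passes many statistical tests for randomness, yet every value is fixed in advance by the seed, which is precisely our point.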
There is an element of randomness which actively resists being defined. Another way of saying this is that like infinity, randomness can only be given provisional definitions. Some of these definitions will inevitably be uninteresting; but some will capture some profound aspect of randomness, and may give rise to whole new areas of mathematical and scientific research.
Infinity and randomness are not the only mathematical ideas to contain conflicts, paradoxes and the intractable twists we have been discussing. The very names that have been given to various number systems – negative numbers, irrational numbers, imaginary numbers, complex numbers – show the problematic origins of these concepts. Significant mathematics is born out of conflict. It is only in retrospect, by virtue of a retroactive rationalization, that people convince themselves that mathematics is merely a logically consistent process which could be generated by a computer program. We could reduce ‘mathematics’ to some computer activity that contains mathematical content, but this activity will not be real mathematics precisely because the conflictual elements, the elements of ambiguity, contradiction, and paradox, will have been removed. In the same way, the ‘morality’ of ‘moral machines’ will be a pallid reflection of the morality of human beings.
© William Byers & Michael Schleifer 2010
William Byers is a Professor in the Department of Mathematics and Statistics at Concordia University in Montreal. He is the author of How Mathematicians Think: Using Ambiguity, Contradiction, and Paradox to Create Mathematics (Princeton UP, 2007) and the forthcoming The Blind Spot: Science and the Crisis of Uncertainty (Princeton, 2010).
Michael Schleifer is a Professor in the Faculty of Education at the University of Quebec in Montreal. He is the author of Talking About Values and Feelings with Children (Temeron Press, 2006), Mutual Respect With Teenagers (Detselig, 2007), the co-editor of Religion, Science and Education (Temeron Press, 2009) and the author of the forthcoming Children and Death: From Biological to Religious Conceptions (Cambridge UP, 2010). Please visit www.michaelschleifer.net.
• This article is in memory of Gerald A. Cohen (1941-2009). Jerry Cohen, Fellow of All Souls College, Oxford University, was a lifelong friend of ours. In his book, If You’re An Egalitarian, How Come You’re So Rich? he describes the communist summer camp in the Laurentian mountains north of Montreal which the three of us attended as children. We have enjoyed philosophical discussions with Jerry for over fifty years. Jerry probably would not endorse all the details of the above argument. Nevertheless, we feel his presence and guidance, along with an approving smile at our collaboration.
• Russell’s Set Paradox: Consider a set whose members are sets that are not members of themselves. Is this set a member of itself, or not? Paradoxically, if it is, it isn’t, and if it isn’t, it is…
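In standard set-builder notation (ours, not part of the original box), the paradox reads:

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R
```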
• Zeno’s Achilles and the Tortoise: It is impossible for Achilles the fast runner to catch up with the tortoise ten yards ahead of him because, as Aristotle says in the Physics, “In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead.”
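The modern resolution sums the infinite sequence of catch-up stages. With illustrative numbers of our own choosing (Achilles runs ten times as fast as the tortoise, which starts ten yards ahead), the infinitely many stages add up to a finite distance, 100/9 yards, at which Achilles draws level:

```python
head_start = 10.0    # yards; an assumption for illustration
distance = 0.0
gap = head_start
for stage in range(60):
    distance += gap  # Achilles runs to where the tortoise just was
    gap /= 10        # meanwhile the tortoise, a tenth as fast, crawls ahead
print(distance)      # approaches 100/9 = 11.111... yards, a finite point
```

Infinitely many stages, but a finite total: the geometric series converges, so the ‘never’ in Zeno’s argument quietly equivocates between infinitely many steps and infinite time.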