
Machine Morality

Machines and Moral Reasoning

Thomas M. Powers on how a computer might process Kant’s moral imperative.

Philosophers have worried about how to compare humans and machines ever since Alan Turing proposed his famous ‘intelligence test’ in his 1950 Mind article ‘Computing Machinery and Intelligence’. If the successful imitation of a human conversation is one sufficient condition for intelligence, as Turing thought, just what else might demonstrate intelligence? How about playing a good game of chess, or navigating a cluttered room to sweep up? If these and similar activities are to count as signs of intelligence, then we humans don’t need to procreate in order to bring more of it into the world. Thanks to companies like IBM and iRobot, intelligence can be purchased in a box – or at least, in an organized jumble of plastic, metal, semiconductors and software, brought to life with a bit of electricity. (Dr Frankenstein, you have competition!)

But of course there are deeper worries about the implications of intelligent machines, and if there’s anything philosophers like to do, it’s worry deeply. Turing and the computer engineers who followed him have invited a kind of conceptual erosion of the human person, it is feared. All kinds of characteristics that were once thought to belong exclusively to humans might now be engineered into all sorts of machines. For sure, intelligence was a significant characteristic to be lost to the machines, but there are others to follow. A quick look through the literature on AI (artificial intelligence) brings with it promises of perceiving, finely discriminating, even empathic machines. (Sober philosophers will insist on scare quotes around all these adjectives, but it is instructive that the engineers do not.)

Perhaps our outlook should not be so bleak: humans aren’t competing with machines – not yet, at least. Surely there’s nothing wrong with a little more intelligence in the world. If computing machines can take over the tasks of directing airline traffic and assessing threats to public health, then we’ll not have lost (much of) anything to them; rather, the machines will take on an increasing share of our practical and civic burdens. But the question remains: Isn’t there something only humans can do?

The Moral Program

One plausible answer to that question would be to insist that only humans can engage in moral reasoning. For followers of the Kantian tradition in moral philosophy, this not only marks out a dividing line between us and machines, it also points to what affords humans a special status as ‘ends-in-themselves’ – thus providing a reason for our not treating humans (including oneself) as mere means. Genuine moral autonomy, including free will, is the key to this special status, and it is a characteristic that machines will never have, since their behavior is tied to the strict causality of their programs and the logic gates of their transistors and microprocessors. Thus you may do what you will to your computer – it’s a mere means.

Still, even for a Kantian such as myself, there are problems with this answer to the question of human uniqueness, giving a kind of head-in-the-sand feel to the worry about the ‘conceptual erosion of the human person’. First, this answer may represent a kind of Pyrrhic triumphalism, since it is almost certain that computers will interact increasingly with humans, and it would be reassuring if we could program something like moral reasoning into them. Second, it is not altogether clear that Immanuel Kant’s conception of moral reasoning is opposed to the conception of mechanical intelligence that was assumed by Turing. That is, Kantians generally think that morality consists of constructing and following rules, and precisely what a computer does is follow rules. Kant himself insisted that moral reasoning was entirely ‘formal’; had Turing’s test involved moral reasoning, perhaps the two conceptions would not appear so far apart. Maybe an AI computer would be able to generate rules too, and follow them in such a way that its behavior appears principled. We should not be too quick to assimilate moral rules to software routines, but let’s explore the prospects of machine moral reasoning in a little more detail, starting with some simple cases.

Suppose we wanted to make a computer for the banking industry that refrained from stealing. A programmer might construe ‘stealing’ as ‘the transfer of money from one person’s account into another’s without the permission of the first person’. One way to achieve such a constraint is to program the computer without any routine whatsoever to transfer money. Yet surely this is vacuous in moral terms: for it to be really moral, we want a machine that can do bad things, but refrains from doing them. The example also makes clear that the inaction of a moral machine would be only half the story. We want a machine that can do good things as well.
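
To make the point about capacity versus incapacity concrete, here is a minimal sketch (the account and function names are my own, purely illustrative): this machine possesses a transfer routine, so it can do the bad thing, but it is constrained to refrain without the first person’s permission, which is quite different from a machine that simply has no transfer routine at all.

```python
# A minimal, purely illustrative sketch: the machine has a transfer
# capability (it *can* "steal"), but it refrains from using it without
# the account holder's permission. All names are hypothetical.

class Account:
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

def transfer(source, target, amount, permission_granted):
    """Move money between accounts only with the source owner's permission."""
    if not permission_granted:
        # The capability exists; the machine refrains. This is the morally
        # interesting case, as opposed to mere incapacity.
        raise PermissionError("Transfer refused: no permission from the account holder.")
    if amount > source.balance:
        raise ValueError("Insufficient funds.")
    source.balance -= amount
    target.balance += amount

a, b = Account("Ann", 100), Account("Ben", 0)
transfer(a, b, 50, permission_granted=True)      # succeeds
# transfer(a, b, 50, permission_granted=False)   # would raise PermissionError
```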

So now, imagine a computer that distributes a good – a scarce, life-saving medicine, say, to several differently situated patients – according to a probability-adjusted cost-benefit rule designed to save the most patients. Such activity might be considered Kantian, but the computer could also be seen as implementing a Utilitarian rule to maximize total social happiness. (Some rules are compatible with several theories on the nature of morality, it seems.) We could well imagine that the computer would be better at calculating complex cases with known probabilities than most humans. In addition, we might get more impartial decisions about the distribution of the medicine, as the computer is shielded from irrelevant factors such as which patient the doctors like better, or which are relatives of the hospital administrators. Still, there are reasons to think that the computer would be stumped in applying the cost-benefit rule. For instance, would mere recovery of the patients matter most, or is it more important how long each patient is likely to live? Can the computer take into account the amount of suffering to be endured by the patients that do not (or do) get the medicine, and how is this weighted against the benefit of recovery for those likely to recover? If statistical probabilities could not be assigned to the patients’ recoveries, what would the computer do then? Would it fail to yield a result, and hence not distribute the medicine to any of the patients?
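
A toy version of such a probability-adjusted rule might look like the sketch below, which assumes each patient comes tagged with an estimated probability of recovery if treated and simply allocates the scarce doses to maximize expected recoveries. The data and helper names are my own illustrative assumptions; and, as the questions above suggest, the rule is silent about suffering, length of life, and cases where no probabilities can be assigned.

```python
# Illustrative sketch only: allocate a scarce medicine so as to maximize
# the expected number of recoveries, given per-patient recovery
# probabilities. All names and numbers are hypothetical.

def allocate(patients, doses):
    """patients: list of (name, probability_of_recovery_if_treated); doses: number available."""
    # Rank by probability of recovery if treated, a crude
    # probability-adjusted cost-benefit rule.
    ranked = sorted(patients, key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:doses]]

patients = [("A", 0.9), ("B", 0.4), ("C", 0.7), ("D", 0.2)]
print(allocate(patients, doses=2))   # ['A', 'C']
```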

Maybe our machine just isn’t Kantian enough. A more explicitly Kantian machine could have a preprogrammed database of permissible maxims. (A maxim is a kind of plan of action suited to a particular circumstance.) Maxims of moral action are what Kant seems to have thought can be generated from the first formulation (the ‘formal’ version) of the Categorical Imperative. The Categorical Imperative, Kant’s basic moral law, says ‘we must act only on maxims which we could will to become universal laws’. One condition of a maxim becoming a universal law is logical consistency with other universal laws. This looks like a non-starter computationally; we couldn’t check any maxim unless we already knew what the other universal laws were! But suppose we could begin with one universal law and then build the database of permissible maxims by an iterative process, generating them from the one law we had. Once we had a large enough database we could instruct the computer to act on the following rule: ‘Only do action a if it appears in the database of permissible maxims’.
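
That closing rule can be rendered as a simple lookup. The sketch below represents a maxim as an (action, circumstance) pair and the database as a preprogrammed set; the representation and the entries are my own illustrative assumptions, not Kant’s.

```python
# Sketch: a maxim represented as an (action, circumstance) pair, plus the
# rule 'only do action a if its maxim appears in the database of
# permissible maxims'. Database contents are purely illustrative.

permissible_maxims = {
    ("tell_the_truth", "asked_a_question"),
    ("return_borrowed_item", "loan_is_due"),
}

def may_act(action, circumstance):
    """Permit an action only if its maxim is in the preprogrammed database."""
    return (action, circumstance) in permissible_maxims

print(may_act("tell_the_truth", "asked_a_question"))        # True
print(may_act("break_promise", "promise_is_inconvenient"))  # False
```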

For this to work, first we would require a computer with the kind of intelligence that could aptly discriminate one circumstance from the next, for very few actions are permissible in every circumstance. (The performance of an emergency tracheotomy is a perfect example.) Second, we might never program a sufficient number of maxims into the database. We would almost certainly be sending our machine out into the world unprepared to meet some circumstances, and hence incapable of computing the morally permissible way to act.

The programming solution to this problem is to build into the software the ability to recognize new circumstances, and to compute new permissible maxims by applying the test of logical consistency with prior universal laws. And to go even further in the direction of Kantian moral theory, we might expect such a computer to be able to compute the classes of forbidden and obligatory maxims through basic logic (if doing action a in circumstance c is not permissible, then the maxim to do a in c is forbidden; if failing to do a in c is not permissible, then the maxim to do a in c is obligatory).
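
The derivation in parentheses can be spelled out directly. The sketch below assumes some permissibility test is already in place (here just a toy database lookup like the one above); the ‘refrain:’ encoding of omissions is my own illustrative convention, not anything in Kant or in the argument.

```python
# Sketch of the derivation described above, given some permissibility test
# (here a toy database lookup, as before). Entries are purely illustrative.

permissible_maxims = {
    ("tell_the_truth", "asked_a_question"),
    ("refrain:break_promise", "promise_is_inconvenient"),
}

def permissible(action, circumstance):
    return (action, circumstance) in permissible_maxims

def forbidden(action, circumstance):
    # Doing a in c is forbidden if doing a in c is not permissible.
    return not permissible(action, circumstance)

def obligatory(action, circumstance):
    # Doing a in c is obligatory if failing (refraining) to do a in c
    # is not permissible.
    return not permissible("refrain:" + action, circumstance)

print(forbidden("break_promise", "promise_is_inconvenient"))   # True
print(obligatory("tell_the_truth", "asked_a_question"))        # True
```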

While there are still some formal hurdles to constructing such a machine (I won’t go into the more arcane logical problems here), it’s worth asking what we’d have if such a machine were constructed. According to our hypothesis, prior to undertaking any action whatsoever, the machine would have to check to see if an action was permissible, forbidden or obligatory. This is something even the most careful human moral reasoners do not do. Indeed, the vast majority of our actions we take to be non-moral in nature, and for the most part not subject to moral evaluation. Our intuitive ‘database’ of permissible actions is huge, although this may indicate our moral laziness. For example, how often do we ponder whether to drive to work or take public transport? If there are obligations to reduce our carbon footprint, or even to reduce traffic congestion, this choice is certainly a moral one – yet most people fall into one or the other habit without reflection. The machine would not be subject to such unreflective habits.

Triumph of the Moral Machines

It is interesting to evaluate such a hypothetical machine in light of the possible ‘erosion’ of central human characteristics like intelligence, empathy, and sociability. The more exactly our moral machine implements such a formally structured Kantian morality, the less it would behave like a human in some relevant respects. It would have no need for expressions of regret, moral conflictedness or any act of conscience, since everything it did would fall neatly under the categories of moral maxims that we’ve programmed into it or which have been logically derived from those we’ve programmed. It would not suffer from weakness of the will, because it would be programmed to always act according to its moral categories.

These behavioral characteristics would have to be considered a moral success from the standpoint of the Kantian programmer who created such a machine. But what about the machine’s conception of moral success, supposing it could have such a thing? It seems it would have to tie moral success to the performance of actions according to its moral categories. So, much like a behaviorist social scientist, the moral machine would define ‘moral agent’ from the observable behavior of candidates. Following Adam Smith, we might call the machine an ‘impartial spectator’. Such a machine could well see human action as fickle, unreflective and morally lazy. It might even turn the tables on the Kantians, and seek to establish the moral status of the ‘means-in-themselves’ – the moral computers. After all, ex hypothesi, it is this formal and mechanical moral intelligence that has achieved such moral success.

These hypothetical moral machines pose quite a challenge for the Kantian conception of autonomy. For Kant, the notion of autonomy plays two major roles in his metaphysics and moral philosophy. It serves as a kind of resolution to the ‘antinomy’ [irresolvable contradiction] of free will and causal determinism which is a central problem of his first two Critiques. And as I mentioned earlier, autonomy lies at the basis of Kant’s claim that humans have special moral status and intrinsic value. For Kant, practical freedom or autonomy simply is the ability to act according to a moral law one generates by one’s reason. But what if this ability is best executed by a machine strictly following a formal procedure for deciding what it ought and ought not do? Will the Kantian machine be a better moral agent because its behavior is strictly causally determined by its program through its logic gates? Kantians have always supposed that formal moral reasoning and the special moral status of humans pulled in the same direction, so to speak, because free will (the ‘uncaused causality of the will’) was something that mere means – mere machines – could never have. Now it seems that the ability to reason morally would be better seen as independent of our autonomous will, not a consequence of it.

Perhaps the upshot of this thought experiment is to heed the warning to be careful what one wishes for… it may come true. Therefore, we should be very careful about what abilities we program into computers, and what responsibilities we assign to them. In the end, if the machines are coming, it is humans who are constructing them.

© Thomas M. Powers 2009

Thomas Powers is Assistant Professor of Philosophy and Director of the Science, Ethics and Public Policy program at the University of Delaware.
