
Machine Morality

Will Robots Need Their Own Ethics?

Steve Torrance asks if robots need minds to be moral producers or moral consumers.

Robots present an interesting double picture. We can see them simply as our tools, as things we use. Alternatively we can see them as agents, as embryonic persons. The ethical implications of robotics look very different depending on which of these views we adopt.

If we consider them as tools akin to cars or factory control systems, then ‘robot ethics’ solely concerns our responsibilities towards the human community and the environment in our use of robot technology. This is a debate already covered in the extensive literature on the ethics of technology and IT. To some people that’s the end of the argument. But if instead we view robots as potential agents or persons, with a degree of autonomy that approaches or may even exceed human autonomy, then ‘robot ethics’ depends upon the notion that robots might in some sense be moral agents in their own right.

Conservative Versus Radical

On the most conservative view, robots’ lack of biological structure prevents them from attaining genuine moral status; even if artificial intelligence develops to a high degree, the kinds of agency that robots could realize will never qualify them as members of the moral community. On this view, robots are just mechanisms, tools. (Supporters of this view may take a different approach to organic artificial agents.)

On a more radical view, robots must seriously be considered to be not merely tools, but subject to ethical considerations in their own right, at least in principle. They have intrinsic rather than just instrumental status. On this view there is no reason why electronics versus biology should make a difference to ethical status: it’s rather a matter of design or functionality.

There’s a reason for taking this radical view seriously, at least as a pragmatic stance. The likely proliferation of autonomous robots in factories, on battlefields, in public places and in homes means that robots may soon be occupying roles and making decisions which if taken by humans would be seen as having deep moral importance. These robots may be crude beasts, and may resemble humans only in very superficial ways. Alternatively they may resemble humans in certain subtle and important ways, while not at all in their superficial respects. For example, they may get about on wheels rather than legs, but have powerful perceptual and decision-making faculties which allow us to put them in charge of situations previously the preserve of highly responsible professionals.

On one version of this more radical view, we need not just to put external controls on robot actions, but also to develop systems of internal (ie self-) direction in the robots. We might as well call these systems ‘ethical codes’ and talk of the robots as having moral responsibilities, because that’s how we’d talk about the equivalent case with humans. If so, then building ‘ethical responsibility’ into robots will be not just desirable but imperative.

Further, it may be argued, the complexity of artificial agents will inevitably grow rapidly. We may increasingly see them as having their own needs, interests and preferences, so that we would have duties of ethical concern towards them, alongside the robots having duties of ethical concern towards us. This in turn will require more full-blooded moral and social relationships between artificial and natural agents.

Here then are two positions: conservative, seeing robot ethics as a theoretical fantasy; and radical, seeing it as a pragmatic necessity. On the face of it they seem incompatible, but are they? We may grant that at least for the foreseeable future robots won’t approach humans in ways which would allow them to be full members of the moral community, as ethical agents in some proper sense. All the same, there’s still an urgent technical and moral imperative on us to arrive at ways to ensure that robot behaviour adequately approximates to morally-acceptable human behaviour – that is, to develop what can be called a ‘paraethics’ for artificial agents.

In any case this is what I shall argue. Even if robots are sealed off from participating in ethics in the strict senses that some moral philosophers define, there nevertheless is a form of ethics in which they could and should participate.

Two Aspects of Moral Agency

There are two different but complementary aspects to being a moral agent. There are ethical agents as deliverers or producers of moral action; and ethical agents as recipients or consumers of moral action. If a robot is considered as having certain duties or responsibilities (eg to protect other members of the moral community) then it is being thought of as a moral producer. On the other hand, if we think of a robot as having certain rights or needs which other members of the moral community ought to respect, then it is being thought of as a moral consumer. For a moral producer the relation of obligation goes from the agent to the moral community. For a moral consumer, the relation of obligation goes in the opposite direction.

The two don’t automatically go together. We can’t assume that if x is a moral consumer then x is a moral producer, or conversely. We might consider certain sorts of animals as being moral consumers – for instance, if we think we ought to avoid being cruel to them – while not necessarily considering them moral producers. We may think chickens don’t have moral duties or responsibilities, even though we have duties or responsibilities towards them. On the other hand, certain kinds of robots may be moral producers without being moral consumers. They may have enough capacity for autonomous action for us to regard them as having moral responsibilities or duties towards us, while we may simultaneously think that, as totally non-sentient creatures, they can’t make any moral demands upon us, such as a duty of care.

Criteria For Moral Status

What might be required for robots to qualify as moral agents in either sense? Many kinds of things might be considered crucial: rationality/intelligence; autonomy; free will; awareness; self-awareness; imagination; emotional responsiveness; empathy; purposiveness… and the list could go on.

Some of these qualities are more readily attributable to robots than others. For example, it’s common to hear talk of robots as being intelligent, or as autonomous agents – perhaps we can even conceive of them as having a degree of self-awareness or imagination. But some of the other items in the list don’t seem easily applicable to robots, however highly developed – what about empathy or free will? And it may be argued that you couldn’t have a ‘genuinely’ intelligent, rational, autonomous or self-aware entity unless it also had a mind with experiential contents and some degree of emotional response: at least a primitive experience of being conscious.

There has been much discussion about whether there could be a conscious electronic creature. Part of what makes this discussion so highly charged is that important issues about the moral status of robots seem to hang on the conclusion.

Some would argue that within the limits of foreseeable technology an electronic agent can’t be ascribed any mental states at all. Any mental capacities a robot appears to display are on this view merely pseudomental capacities – the robot is merely simulating thinking, intelligence, autonomy or whatever. And to this may be added a rider: without any mental capacities you can’t expect a robot to have any moral status – either that of a moral consumer or of a moral producer.

Of course there are other, less extreme positions. Another view might be: true, a robot can’t be conscious – and therefore it can’t be a moral consumer. We can’t, strictly, have duties of care or respect towards robots, although we may value them highly in the way we may value a sleek sports car or an exquisite painting. But non-conscious robots can still be considered moral producers because even if they don’t have the kind of autonomy that humans have, they at least have a kind of functional or operational autonomy, in that they have the capacity to make decisions which may be of considerable importance.

Robot Soldiers: A Case Study

Consider robots deployed in combat. Battlefield robots may be enabled, or even required, to make decisions to kill enemy combatants without referring back to a human commander. There is no reason in principle why such robots might not have decision-making capacities functionally equivalent to those of human soldiers, or even why they shouldn’t make more reliable decisions than those routinely made by human personnel. For instance, it has been claimed that robot soldiers will be designed to distinguish hostile forces from civilians more accurately than human soldiers; or that their decision-making may be less ‘clouded’ by emotions than human soldiers. Both claims might be greeted with some scepticism.

It might be that robot soldiers carry a moral responsibility for decisions that they make in combat situations, just as human soldiers do. However, if designed along current technological lines, such robots will be totally non-conscious. So this is a case where there would be moral production without moral consumption. There are many less dramatic cases that illustrate the same point. Robots or computerised decision-support systems may be put in positions where they must make decisions regarding the health or welfare of many people in hospital, or where they make financial buy/sell choices that have large-scale effects on national or international economies. In such cases we may wish to apply certain kinds of moral appraisal to their actions – to consider them as having responsibilities as moral producers.

Proliferation

In the future, robots may proliferate much as other successful technologies have. How would we cope with their arrival in large numbers, perhaps creating a new kind of ‘social stratum’? Indeed, how should we define the ‘we’ doing the coping? If the possession of a biological constitution, conscious feeling and/or other special properties is a requirement for entry into the world of ‘persons’ and of ‘society’, then non-sentient robots, human-like as they may be, will always just be tools.

Many might doubt this. But even if one took this restrictive view, it would surely be wise to develop a paraethics for any large population of personoid robots that may emerge. We would have expectations of their behaviour similar to our expectations of the behaviour of humans. Therefore they would need to operate seamlessly within our moral discourse and engage in appropriate ethical interactions – even if they’re conceptually unable to have genuine moral responsibilities, that is, be genuine moral producers. This has implications for robot-designers and manufacturers.

There will also be a host of particular requirements special to the different fields in which such robots will – in many cases, already do – operate: warfare, policing, medical care, home care, education, scientific research, industrial production, etc. Each area will need a carefully formulated and implemented set of paraethical ‘professional precepts’ or guidelines. These requirements will cover not just how non-sentient robots may treat humans and other sentient beings, but also how they may treat themselves. There will be debates about the extent to which such precepts should be hardwired and the extent to which they should be left as matters for autonomous decision by the robots themselves. Manufacturers will be expected to incorporate such paraethical constraints into their designs before large-scale production. And we will need to educate ourselves in how to treat robots: responsibly, but guarding against over-humanizing or over-empathizing attitudes.

Conclusions

There may be a strong pull towards saying, conservatively, that electronic robots will remain non-experiencing creatures, however complex and high-functioning their capacities, and however closely some may come to resemble humans. If so, then as non-sentients, perhaps, they cannot directly be either moral consumers or producers.

But that very restrictive view may be misguided; many would say it is. Even if we accept it, a possible future world with robots all around us (and around each other) is fraught with too many dangers for us to leave the question of robot ethics there. I have suggested that we have to develop a special form of ethics for use by and towards robots – a paraethical range of quasi-moral expectations about how they treat us and themselves, and about how we treat them. Otherwise we may find any such techno-future much harder to deal with than all the previous techno-futures that have already arrived.

© Prof. S. Torrance 2009

Steve Torrance is Emeritus Professor of Cognitive Science at Middlesex University, and a Visiting Senior Research Fellow at Sussex University.
