Moral Moments

Man in the Middle: Animals, Humans and Robots

by Joel Marks

Humans used to be thought of as occupying an existential space between animals and gods. Clearly our bodies display our affinity to animals, while our minds possess intellectual and moral capacities seemingly beyond the reach of physical objects. This even led some to identify us with either animals (we are mammals) or gods (we are immortal, immaterial souls). But today, technology has replaced the divine, and computerized robots have replaced gods as one of the extremes between which we humans may represent the mean. Today it seems plausible to postulate that we are material mechanisms of a high order, who have evolved from common ancestors with the other animals (and plants) and may eventually merge with our digital creations to become ‘transhuman’ cyborgs.

One great difference between the relationship of humans to gods and that of humans to robots is that the gods were taken to be the standard of comparison (we were made in God’s image), while with robots, man is the measure. We aspire to create robots with human-like attributes, even though their capabilities may in some ways exceed ours (e.g., computers calculate faster and more accurately than their creators). Interestingly, this makes our relationship to robots more like that to animals than to gods, for humans have a tendency to view other animals through the human lens too, namely, as lesser or ‘subhuman’ (e.g., what matters is that they can’t calculate at all). This is both an anthropomorphic and anthropocentric distortion, of course, since lions and tigers and bears are not lesser anything, but only ‘perfect’ specimens of themselves, having evolved to function optimally in their respective ecological niches.

We begin to see, therefore, that the various efforts to understand humans’ place relative to nonhumans could have profound moral implications. Insofar as we distance ourselves from other biological or electro-mechanical entities, we tend to exclude them from the ‘circle’ of morality. In fact there are two moral circles; and this fact has different ramifications for animals and robots, for they would arguably belong to different ones.

One of the moral circles is that of moral agency, the set of beings who bear some degree of moral responsibility for (some of) their behavior. Nonhuman animals, at least in the wild, are seldom considered to be moral agents. A hawk isn’t doing anything wrong by ripping the head off a live chipmunk prior to devouring it. However, it is quite plausible for animals to be thought of as moral patients, that is, beings who merit moral consideration by moral agents, such as us humans. Peter Singer has famously reintroduced W.E.H. Lecky’s notion of an expanding circle of moral regard that would encompass other animals. That, then, would be a circle of moral patiency. Robots do not obviously belong to this circle because they are not sentient. If they can’t feel, we don’t have to worry about hurting them. Whether they might someday be sentient is debatable. But what is not debatable is that they are capable of some sort of agency, and so they could conceivably claim a place in an expanding circle of moral agency – a point emphasized by Wendell Wallach and Colin Allen in their new book, Moral Machines (Oxford, 2009).

We must keep these two circles of morality separate in our thinking. To fail to do so could actually lead to a contraction of both, with undesirable results. The circle of moral agency would contract (or at least not expand) if, as some have argued, moral agency required moral patiency; this requirement would make sense, for example, if, as Hume maintained, sympathy is essential to moral responsiveness. Without the capacity to feel, robots couldn’t feel sympathy, and according to Hume would therefore have no genuine impulse to act morally. But that requirement would be undesirable if we wanted to assure sufficient self-restraint by autonomous automata, regardless of their affective capacity, since holding them responsible for their behavior – perhaps the defining feature of moral agency – might be one of the most effective means of doing so. Meanwhile, the circle of patiency would also contract if the (aptly named) contract theorists, such as Hobbes, were correct that moral patients must be moral agents. Such thinkers claim that morality exists only among equals for their mutual benefit, giving them corresponding rights and duties to one another. On this view, if nonhuman animals could not have duties because they are not moral agents, they would have no rights and hence would not be moral patients either.

Contractualist reasoning strikes me as fallacious since I see rights as emanating from certain inherent qualities of a being, such as sentience, relative to certain inherent qualities of another being, such as rationality. Hence, I believe, nonhuman animals are moral patients even though they might not have the requisite degree or type of rationality to be moral agents. Meanwhile, could robots become sufficiently sentient to count as moral patients? Wallach and Allen speculate on the possibility of what we might call an ‘artificial rights’ movement analogous to the animal rights movement. But they don’t focus on that frontier, seeing it as a problem still far in the future. One of the strengths of their book is its even-keeled approach to technological prospects; they are much more concerned about what is likely than what is merely imaginable. Thus, it may seem reasonable to position human beings at the intersection of nonhuman animals, who are certainly moral patients but possibly not moral agents, and nonanimal robots, who could conceivably become moral agents but perhaps not moral patients. Only humans are unequivocally both.

But rather than overemphasizing the inquiry into the nature of human nature, we might in the end find it more compelling to ask what exactly makes something a moral agent or patient. We should keep in mind that in these two moral circles the center has no special significance. The circles merely represent sets, so the question is: what makes for membership in the one or the other? In other words, thinking about moral agency and patiency in the light of nonhuman animals and robots could lead to a Copernican revolution in ethics, whereby humans lose their privileged position at the center. The circles are expanding, but not necessarily outward from us.

So, for example, how could we tell if robots were moral agents? Isaac Asimov popularly promulgated ‘laws’ for robots prohibiting their harming of humans; but would robotic laws of behavior be moral laws? If they were intended as design mandates, then from the standpoint of the robot they would be more like laws of nature, which one cannot help but ‘obey.’ Would that disqualify them as moral, and hence the robots as moral agents? If you see freedom of choice as essential to morality, the answer must be yes. But confronted by a wholly programmed robot who unfailingly behaved as a perfectly moral human being would, might we not appear simply prejudiced (or ‘humanist’) to deny it moral agency status? “Here I stand: I can do no other” is not obviously to be spurned as a genuinely moral stance.

We might also ask of Asimov: How relevant to morality are robotic laws that govern their behavior toward humans? Might not the first question be how robots would relate to one another? The very idea of observing or testing a robot for moral agency could be biased if we limited the criterion to its treatment of human beings. This would also apply, other things being equal, to the hawk and the chipmunk. Maybe what we should really be investigating is the hawk’s interaction with other hawks in order to see if it does, after all, exhibit moral agency.

We might then ponder further whether there could be degrees of moral agency instead of only all or nothing membership in the set of moral agents. And one measure of degree could be precisely how large a membership exists in one’s circle of moral patiency. Do humans become more moral to the degree that we accord moral consideration to robots and hawks?

Such questions only multiply. Finding the answers would probably involve both empirical investigation and theoretical explanation as we revised our very notion of morality in the light of more careful and extensive observations and the coming into being of new entities. The empirical part would be complicated by the ethical consideration presumed due to animals and robots, and not only as to how or whether we should be testing or even observing them but also whether we should be bringing new types into existence, such as machines that could experience pain. There are many good reasons, therefore, for collaboration among ethologists, roboticists, and ethicists – oh my!

© Joel Marks 2009

Joel Marks is Professor Emeritus of Philosophy at the University of New Haven in West Haven, Connecticut.
