
Future Shocks

Robot Rules!

Brett Wilson judges the case for laws for robots.

Some time in the near future your cat Tybalt, while sunning himself on the lawn, suffers a hair-raising experience which scars him for life. The first you know about it are the cat calls that alert you to a standoff between feline and machine, just before you glimpse Tybalt haring it for the catflap. Examining your poor moggy you realise that next door’s automated lawnmower, after forcing its way through a gap in the fence, has mistaken your cat for an unruly patch of couch grass, giving him the fade cut he never wanted.

You decide to sue. Poor Tybalt! His coat will never be the same; and there’s the PTSD to think about. The case seems cut and dried. Your lawyer, though, face like a prune, sighs and tells it straight. Things have changed, he says. The problem is not whether to sue, but who to sue. In the past you might have claimed that the manufacturer had overlooked a dangerous flaw in the lawnmower, or worse, seen one and ignored it. Tybalt would be rolling in catnip. Alternatively, your neighbour might be at fault if they had used the mower inappropriately, just as if they had set off a firework and burned down your shed, or driven their car into your 4x4 while intoxicated. Tua culpa. But neither of those situations applies anymore. You see, he explains, your machine is a snowflake. Not the atmosphere-susceptible delinquent of teenage parlance, he qualifies. He means an actual snowflake.

Your lawyer’s argument goes like this. In the case of the ice crystals which make up snowflakes, the packing arrangements are simple, but the process by which they are created gives rise to a wealth of forms. Snowflakes display a close but not perfect six-fold symmetry, and for formation to occur, both humidity and temperature need to be right. First, nucleation occurs around a dust particle floating in a cloud. The particle develops facets, favouring some surfaces more than others. This creates a small hexagonal prism whose corners sprout arms. Plates may grow on them. Each arm experiences a similar history and so has a similar form; however, nothing is synchronising the growth of the arms, so contrary to popular belief, most snow crystals are not symmetrical. And as the crystals move through different temperatures, intricate kinds of growth occur, forming unique patterns.

In a similar way, your lawyer explains, the lawnmower manufacturer created a self-learning robot, one that could adapt, but which would grow in such a way that its responses would be unique. It can no longer be said to be the responsibility of the manufacturer, because the builder could never anticipate quite how it would turn out, given the near infinite number of forms its processing might take. It says so in the warranty, too. So where does responsibility lie here?

Robot Responsibilities

Humans have been getting used to responsibility for thousands of years, and the concept sometimes forms the basis of moral arguments. Some philosophers, such as Jean-Paul Sartre, claim that if you are free, then you are responsible (Being and Nothingness, 1943). The first thing to note about Sartre’s freedom is that we cannot choose not to be free, or as Sartre expresses it, we are ‘condemned to be free’. Even choosing not to act is still a choice. So we are responsible for how we respond to and what we make of our world. Sartre calls refusal to own up to our responsibility ‘bad faith’. According to Sartre, if our consciousness were only aware of the present, we could not escape the present, nor choose. But what we imagine, want, or intend, while not yet actual, nor a thing (note the nothingness in his book’s title), is the basis for our choices, releasing thought from being trapped in the present. Furthermore, our choices, derived from nothing, are never necessary; so we cannot blame others for the situations we create.

But machines don’t need free will to learn, just as snowflakes don’t require free will to be unique. The law applies only to human decision-making – at least it has so far – and perhaps this is a recognition of free will. Yet with increasingly sophisticated robots, the responsible human creator is separated from the outcome by a chain of actions we can no longer understand, as happens when we design robots with some degree of artificial intelligence. We are forced to unpick the relations between responsibility and action.

Robert Oppenheimer, one of the creators of the first atomic bomb, opposed the further development of nuclear weapons, having seen the devastating consequences of what he helped create. He argued that scientists should be aware of the consequences of their discoveries even though the end product might be far removed from their initial decisions. This distance between our actions and the consequences of our actions is ever present in our world, for instance when we burn fossil fuels knowing that this will probably lead to the loss of island nations as sea levels rise – or even when we use a motor car, given the inevitable toll on human life. But what’s happening with the autonomous robot is not just difficult to grasp; the chain of responsibility is broken, as we can no longer trace the path from agent to consequence, even in theory.

If consciousness could be present in autonomous machines, then we could claim they are morally responsible – see Sartre’s argument linking consciousness and responsibility. However, developing this idea would involve showing how machines can transition into consciousness from nonconscious states. We cannot plausibly claim that simple machines such as lawnmowers are already conscious, and if they do not start that way, how do they change?

Antonio Damasio has argued that human consciousness developed from, and is closely related to, homeostasis, or body maintenance. To give an example of homeostasis, when you are thirsty, you seek water. Damasio argues that consciousness could have developed from such mechanisms (see also ‘Could A Robot Be Conscious?’, Brian King, Philosophy Now #125). But though homeostasis may be necessary for consciousness to have evolved, consciousness is unnecessary for homeostasis, so it’s not clear what would be necessary for artificial consciousness to form.

One obvious account of artificial consciousness is in terms of increasing processing power or complexity. Some AI proponents argue that the information processing threshold for consciousness is the same as the number of operations per second in a human brain (about 10¹⁶ events per second). But this argument is unsatisfactory, since it is not at all obvious why a machine should become conscious merely because it has reached a particular threshold of information processing. I reject the idea, partly to bypass the claims of many proponents of AI about the so-called ‘singularity’, when the intelligence of machines will surpass that of humans. I claim that it is autonomy that is relevant here, not processing power.

As far as complexity is concerned, the Integrated Information Theory (IIT) of neuroscientists such as Giulio Tononi proposes that consciousness arises when a certain property ‘Φ’ (phi) gets large enough. Φ measures the relationship between differentiated parts and unified wholes in the brain. But the theory seems preconfigured to demonstrate why humans should be conscious, and is suspect for this reason.

I need to declare a sleight of hand here. I am going to substitute the word ‘autonomy’ – in other words, self-activated behaviour – for ‘consciousness’. Autonomy (which literally translates as ‘self law’) has much more utility for us from a legal point of view, because it does not oblige us to assume some version of free will and derive our ethical notions from that. It does however allow us to imagine, for example, a machine with the ability to learn seeking out an electricity supply, and, when the regular supplies fail, becoming creative about it. Even if this is a restrictive definition, autonomy frees us to consider the consequences of having self-governing machines sharing our streets, our roads, and our neighbour’s lawns.

Self-Rule for Robots!

Clearly, we will soon need a structure of concepts and laws to cope with the presence of autonomous machines. Even thinking of an autonomous machine as a sort of slave, with limited rights subject to the property rights of its owner, would not solve the issue here. New laws will be needed. In fact, your lawyer finally explains, your neighbour is no longer the owner of the mower. That ended when it reached autonomy, and so you are no longer in a position to sue him.

However, if we accept that autonomous machines, like animals, are not responsible moral agents just because they can autonomously react to their environment, perhaps we should ask whether the moral aspect alone is enough to ground our judgement. Stealing, for example, is legally important not only because it’s a freely-willed act of a conscious being, but because it has consequences that affect other beings who feel loss and pain. So perhaps we should think in terms of consequences to judge robots.

A problem may also lie in our notion of punishment, the first and most obvious kind being pain. Our sense of justice depends on the notion of a moral balance. When humans are punished, their suffering ‘rights the wrong’. Human beings don’t like pain and they are good at avoiding it. Our acceptance of the moral and social order seems to rely on the observation that sometimes we can’t avoid it, and that the law will find us. But if we were incapable of feeling any pain, would any of us see the need for an apparatus of law? Machines, even autonomous ones, don’t feel pain, or other qualia. In what way is it possible to punish a machine? It probably wouldn’t have a bank account, but even if it did, would a fine in any sense be a penalty? We might, as a punishment, make it harder for the lawnmower to find an electricity supply, but does that punish the mower? The ‘owners’ are liable to feel they are being punished, as they would if the machine were destroyed.

Further complications may arise. If you decide to prosecute, are you doomed to a parade of blank faces and bafflement until, at some future date, an autonomous lawnmower has the right to a jury trial before an array of washing machines, toasters and coffee machines, administered by automated clerks, barristers and even judges?

If we deny machines natural rights because they have no intrinsic responsibility, perhaps we can grant them rights by acknowledging their share in society? In The Social Contract (1762), Jean-Jacques Rousseau writes: “The social order is a sacred right which serves as a foundation for all others. This right however, does not come from nature. It is therefore based on conventions.” Rousseau’s target was the inequality he saw in society, of kings and aristocracy versus peasants. Inequality, he thought, was maintained by mere acceptance, habit, and power. The social contract by contrast is a consensual creation: society and its laws are here something we choose to accept. We might say that although Rousseau did not foresee robot inequality, he could stake a claim for them as members of society; as stakeholders, as we would say now.

One problem with Rousseau’s idea may be that it relies on a society’s members understanding what’s involved in being party to the unwritten contract to obey its laws. Rousseau’s utopian society of equals is dependent on this. Instead, though, the lawnmower is likely to be a hapless object churning through grass without understanding.

Consciousness Rules!

Before having recourse to the law, it may be useful to have some sense of the cognitive development of an autonomous machine.

Human cognitive development was described by the Swiss psychologist Jean Piaget. His The Construction of Reality in the Child (1954) defines explicit stages, from the development of COST (causation, objects, space and time) concepts at two years, to the formal operational stage from twelve years and up. It is only these developments that allow us to share in a culture which includes notions of a social contract. Is it possible to rescue the robot by giving it more cognitive power to recognise its rights?

Some philosophers, such as Daniel Dennett, describe the growth of abstract reasoning in Darwinian terms. In a 2015 address to the Royal Institution, ‘Information, Evolution, and Intelligent Design’, Dennett drew an explicit comparison between genes moderated by selection and mutation in the natural setting, and memes moderated by analogous pressures in the cultural environment. If you follow Dennett, what autonomous machines would inherit from humanity is what nature created in humanity using the same general process. Maybe a collection of autonomous machines with enough processing power and storage could develop a culture; first through the unconscious transfer of information, through what Dennett calls ‘communication without comprehension’; then, further extending the gene/meme metaphor, through the ‘domestication’ of words – through words and symbols designed by the machine. On this account, machines with sufficient processing capacity might learn to understand the rights conferred on them.

Let’s assume the mower loses its case. What form would justice take? One solution would be behaviour modification – perhaps with a virtual reality module plugged into the machine, so that progress could be observed without putting innocent felines at risk. Whatever merit that idea has, it would not be punishment. That would be reserved for men, infinite in faculty, in apprehension how like an angel. Continuing with the tenor of this article, perhaps we should add: not at all like a machine.

So now we know what happens if an autonomous robot runs into you or your cat, but what happens if you run into it? Can you be prosecuted by a machine? Maybe we should consider that catastrophe another time.

You reach forward and press a button. The prune-faced lawyer slouches onto the desk like a deflated balloon, eyes unmoving. That autonomous machine gave you much to ruminate on, and at a cheaper rate than a human advocate…

© Brett Wilson 2020

Brett N. Wilson is a writer living in Manchester, England, and is the author of the hard science fiction novel The Tears of God.
