Living with Robots by Paul Dumouchel & Luisa Damiano
Richard Baron wonders what changes in our own thinking it would take to live with sociable robots.
The robots are coming. Many are already here. They perform surgery, assemble cars, fetch and carry in warehouses, and clean floors. These robots are simply tools which do the jobs we give them, even when they use a degree of artificial intelligence to discover ways of working that their programmers had not imagined. But soon there will be robots which will provide us with social interaction.
Social interaction may be desired just for its own sake, or it may be needed for therapeutic reasons. We may simply want to be entertained when our human friends have better things to do. Some people are housebound, and others have conditions which require kinds of interaction carefully tailored to their needs. Alternatively, social robots may be needed because social interaction is essential for performing specific tasks, such as acting as a receptionist or looking after children.
Social robots will interact with us in a new way, although this ‘new’ form of interaction is actually a very old one between human beings. The robots will respond to us creatively as we respond to them, and will build up feedback loops which will improve their social relationships with us, and our relationships with them – just like the feedback loops that have always arisen between people.
The central philosophical claim of Living with Robots is that in order to understand how social robots will fit into human society we should reshape our conception of conscious beings. We should no longer think of conscious beings only in terms of an internal mind which manufactures thoughts and emotions and then launches the consequences on an external world, but in terms of the relationships that agents have with the world. The clear implication is that such thinking will spill over into our philosophizing about the human mind.
Chapter 1 sets out the characteristics of a plausible social agent. Most unnervingly for those who like robots to be obedient tools, a social agent must be authoritative when appropriate. A receptionist must be able to ask us to wait in the lobby. A child-minder must be able to say, “George, don’t do that!” without provoking the thought that it’s just reciting lines in a script triggered by input from the environment. For that would destroy the agent’s authority, turning instructions into mere suggestions. A social agent must also be physically present, not a disembodied voice or a set of instructions on a screen. Our basic mode of communication with human beings is face-to-face, and physical presence gives scope to act as well as to communicate. Finally, a social agent must have enough autonomy to effectively adapt to changes in its environment, for example when its work is interrupted, or a human being around it behaves in an unusual fashion. Without this autonomy, it would soon fail to coordinate effectively with human beings.
Chapter 2 sets out various existing positions in the philosophy of mind, from Cartesian dualism to more modern materialist positions. These are all seen as locating the mind within the subject. In Chapter 3 the authors steer us away from this picture of an internal mind. They start with a dubious use of Descartes. While he did focus on the internal mind (most famously by doubting the existence of everything external but being left with the certainty that he was thinking), they see him as only having been able to do so because he thought about his functioning in an environment – even if it might contain an evil demon. The authors, Dumouchel and Damiano, believe that this undermines the case for seeing the mind as purely internal to the subject. But for his thought experiment to work, Descartes only needs the possibility of an environment. That is not enough to undermine the picture of an internal mind and so to support the authors’ own conclusion, which is that “the mind is neither in the brain, nor in the head, nor outside the agent… but in the relations that obtain between epistemic agents” (p.101).
The authors develop their case for this conclusion by surveying work on social robots. They acknowledge that attempts to get emotions and empathy right in robots appear to reinforce the picture of an internal mind and an external world. Some research concentrates on producing artificial versions of the mechanisms of human emotion, while other research concentrates on getting the appearance right, so that people will see robots as having appropriate emotions. But, the authors argue, we should see research of both sorts as breaking down the distinction between ‘internal’ and ‘external’. An internal mechanism is designed to yield states and behaviour that are appropriate to the external environment, the objective being harmony between the robot and the environment, including people and other robots. In turn, some actions by the robot are intended to produce appropriate internal states in human beings. The ultimate goal is an ‘affective loop’, in which a robot and a human being react to one another in ways that deepen their social relationship and make it more effective.
While this case for discarding the old picture of internal mind/external world is engaging, it does not establish that this is what we should do, even in relation to artificial minds. Granted, people who build robots may need to think in terms of all the participants in a social environment and concentrate on the interactions; but it might still be appropriate to see that network of interactions as emerging from separate thinking and feeling entities, rather than seeing the thinking and feeling as emerging from the network.
Wake Up by Cameron Gray, 2019 (parablevisions.com, facebook.com/camerongraytheartist)
The authors do however say more to support their conclusion. In Chapter 4 they set out work in psychology and neurology which characterizes emotions as social – as arising out of interactions between people and coordinating their behaviour – rather than as starting within the individual then emerging into the world. But neurons are inside the skulls of individuals. The substantial contributions that cognitive neuroscience has already made to our understanding of social interactions suggest that a picture which builds up from individuals to their interactions can still do useful work.
The authors also consider some social robots, in particular a cute toy seal called Paro which responds to its name and to being stroked, and which has turned out to be of therapeutic value. Paro does nothing when left alone. Its apparent enjoyment of life only exists in relation to the human beings who play with it. In this way it fits the picture of emotions as arising out of interactions. But we should be cautious when reapplying this argument. Human beings do have emotional states when on their own. Even human beings brought up without social contact have them. And social robots a good deal more complex than Paro might have them too.
Finally, Chapter 5 considers robot ethics. At the opposite end of the scale from cute robot seals are military robots. We might want to program them so that they automatically observe the Geneva Conventions. That would be reassuring. But the military would want the power to override that constraint in exceptional circumstances, and that veto would be concentrated in the hands of a small number of senior officers. It might be better to allow the robots themselves a bit of leeway while equipping them with artificial empathy so as to deter them from committing atrocities. More generally, social robots are going to need the flexibility to respond appropriately to the requirements of the human beings with whom they work, and rigid rules will sooner or later prove inadequate. Social robots will lose all credibility if their obedience to rules reduces them to simply saying, “Computer says no.” If they are to have as much flexibility as human beings, they will need an appreciation of ethics that goes beyond knowing the rules – an appreciation that allows them to think about the rules, and occasionally to challenge them.
Overall, this book offers plenty of food for thought. It does not establish that we should change our paradigm for the philosophy of mind, but it opens up the possibility and shows how it could be developed. And if anyone is expecting the robot revolution to change nothing much apart from the number of hours we have to work, this book will surely change their mind about that.
Finally, although the book is well-produced, the publisher should be told off for one failure of layout. Notes, including many which enlarge on the argument rather than merely giving references, have been shunted to the end of the book rather than being made easily accessible at the bottom of the page. Now that typesetting is done by computer, there is no excuse for this.
© Richard Baron 2019
Richard Baron is a philosopher in Cambridge. His website is www.rbphilo.com.
• Living With Robots, Paul Dumouchel and Luisa Damiano, Harvard University Press, 2017, 280 pages, £21.95 hb, ISBN 978-0-674-97173-8