Editorial
The Conscience Of The Machine
by Wendell Wallach
Computers touch nearly every facet of modern life, from desktop systems that facilitate entertainment, communication and research, to financial systems that initiate millions of transactions on world markets daily. The robots are also coming. They are already evident (although perhaps not recognized) as household appliances, manufacturing systems, museum guides, hospital delivery systems, toys, and even as lethal weapons deployed in Iraq and Afghanistan. Soon we will have service robots caring for the elderly and homebound. Engineers have always been concerned about the safety of the systems they build. However, designers of computerized systems cannot always predict how those systems will act in new circumstances with new inputs. Safety therefore requires that, as the technology becomes more sophisticated, computers and robots become moral reasoners. So philosophers and engineers are starting to work together to build computer systems and robots that can make moral decisions.
The development of autonomous computers and robots whose decisions increasingly affect humans for good or ill has given rise to a new field of inquiry, variously known as Machine Morality, Machine Ethics, Friendly AI, Artificial Morality, and Roboethics. It focuses on the prospects for building computers and robots that are moral decision-makers. Designing computers and robots that will not harm humans is both a practical and a philosophical challenge.
‘Machine Morality’ addresses a number of questions:
• Are machines the kinds of entities that can in principle make moral decisions? If so, why? If not, why not?
• Is moral decision-making something that can be computerized? If so, how?
• Does humanity want machines making moral decisions? When? In what circumstances?
• Whose or what morality should be implemented?
As well as the computational possibilities, the philosophical issues that arise in thinking through these questions are very rich.
The ethical behavior of machines is determined by the values designed into their systems. Initially, complex machines will operate in limited contexts, and to the extent that the designers can predict all the situations a machine will encounter, it will be (designed to be) operationally moral. However, as systems cross the threshold where the designers and engineers can no longer predict how they will behave with new inputs, the machines will need a kind of functional morality. That is to say, they will need to process an array of moral considerations in the selection of a course of action.
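To make the idea of functional morality a little more concrete, here is a minimal, purely illustrative sketch in Python of one way an autonomous system might weigh explicit moral considerations when choosing among candidate actions. The considerations, weights, and scores are hypothetical assumptions introduced only for illustration; they do not describe any existing system or any of the approaches discussed by the contributors to this issue.

```python
# A minimal, illustrative sketch of "functional morality" as weighted action
# selection. All considerations, weights, and scores here are hypothetical.

from dataclasses import dataclass
from typing import Dict, List

# Designer-chosen weights encode which moral considerations matter and how much.
WEIGHTS = {
    "avoids_harm": 0.5,
    "respects_autonomy": 0.3,
    "benefits_user": 0.2,
}

@dataclass
class Action:
    name: str
    # Per-consideration scores in [0, 1], assumed to be supplied by the
    # system's own perception and prediction components.
    scores: Dict[str, float]

def moral_score(action: Action) -> float:
    """Combine the weighted moral considerations into a single ranking value."""
    return sum(weight * action.scores.get(c, 0.0) for c, weight in WEIGHTS.items())

def choose(candidates: List[Action]) -> Action:
    """Select the candidate action that best satisfies the weighted considerations."""
    return max(candidates, key=moral_score)

if __name__ == "__main__":
    candidates = [
        Action("remind the patient to take medication",
               {"avoids_harm": 0.9, "respects_autonomy": 0.8, "benefits_user": 0.7}),
        Action("notify a caregiver over the patient's objection",
               {"avoids_harm": 0.95, "respects_autonomy": 0.2, "benefits_user": 0.8}),
    ]
    print(choose(candidates).name)  # -> "remind the patient to take medication"
```

Even this toy sketch exposes the hard questions: who sets the weights, where the scores come from, and what the system should do when no candidate action is morally acceptable.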
Whether machines will eventually be artificial moral agents with human-like intelligence and self-awareness is a subject upon which theorists disagree. However, this topic already stimulates serious reflection by philosophers, legal theorists, and futurologists. Discussions about when artificial agents might be held responsible for their actions, and whether they might deserve property and civil rights, have furthered the understanding of moral agency and legal responsibility.
While futurology is full of fascinating and illuminating thought experiments, given the relatively primitive state of present-day AI research, the discussions tend to indulge highly speculative possibilities. For the foreseeable future, machine morality will be mainly about ensuring that autonomous systems are safe and that their actions reflect human values.
This issue offers several articles that touch upon different dimensions of machine morality. The first, ‘The Challenge of Moral Machines’, extends my brief introduction to this new field by outlining the basic issues.
Much of the excitement arising from machine morality is due to the way it forces us to consider human ethics and decision-making in new ways. For example, is it essential to have emotions or consciousness to be a moral agent? Steve Torrance inquires ‘Will Robots Need Their Own Ethics?’ This is followed by ‘Four Kinds of Ethical Robots’, in which James Moor proposes a framework of possibilities for thinking about artificial moral agency. Analyzing the computational requirements for implementing a rule-based theory of ethics, such as utilitarianism or Asimov’s laws for robots, is one approach to building moral machines. Tom Powers’ discussion in ‘Machines and Moral Reasoning’ considers how a computer might follow Kant’s categorical imperative. Finally, the team of Susan Leigh Anderson (philosopher) and Michael Anderson (computer scientist) has been at the forefront of both philosophical reflection on, and experimental implementations of, moral decision-making by computers. In an article the Andersons co-authored, they introduce some of their experiments and consider ‘How Machines Can Advance Ethics’.
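For readers curious what a rule-based approach might look like computationally, here is an equally minimal and purely hypothetical sketch in the spirit of Asimov’s ordered laws: higher-priority rules veto candidate actions before lower-priority ones are consulted. The predicates are invented stand-ins for judgments a real system would somehow have to make; nothing here reflects the specific implementations discussed by the contributors.

```python
# An illustrative, hypothetical sketch of rule-based screening in the spirit of
# Asimov's ordered laws: earlier (higher-priority) rules veto candidates first.

from typing import Callable, Dict, List

Candidate = Dict[str, bool]  # flags a real system would have to estimate per action

def violates_first_law(a: Candidate) -> bool:
    """Would the action injure a human, or through inaction allow one to come to harm?"""
    return a.get("harms_human", False)

def violates_second_law(a: Candidate) -> bool:
    """Does the action disobey a human order?"""
    return not a.get("obeys_order", True)

def violates_third_law(a: Candidate) -> bool:
    """Does the action needlessly endanger the robot itself?"""
    return a.get("endangers_self", False)

# Priority order mirrors the ordering of the laws.
PRIORITIZED_RULES: List[Callable[[Candidate], bool]] = [
    violates_first_law,
    violates_second_law,
    violates_third_law,
]

def permissible(candidates: List[Candidate]) -> List[Candidate]:
    """Apply each rule in priority order, narrowing the set only when something survives."""
    remaining = list(candidates)
    for rule in PRIORITIZED_RULES:
        survivors = [a for a in remaining if not rule(a)]
        if survivors:
            remaining = survivors
    return remaining
```

What the sketch makes vivid is that everything turns on how predicates like ‘harms a human’ are given content, which is exactly where rule-based proposals such as Asimov’s laws become philosophically interesting.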
On behalf of all the contributors to this special issue, I invite you to join us in the philosophical enquiry stimulated by the practical need to build moral machines.