
Just Science Fiction?

Andrew Reynolds says not.

For decades, science fiction has explored our distrust of computers. The examples are legion: HAL in 2001, Proteus in Demon Seed, and the Defense Network Computers in The Terminator, to give but three. So it would be easy to imagine that computer ethics was a sub-genre of SF rather than an area of serious philosophical study.

But what these and many other SF treatments of computing have in common is that the computers are sentient. The computers in these stories have a value system that informs their capacity for free action. We find these tales powerful because the value systems portrayed mirror our own. The computer is often set in the role of the dominant entity that negates the rights of others in the pursuit of its own. So, in many cases SF is making real and valid points not about computing but about humanity and our paradoxical capacity for inhumanity.

But to see the relevance and indeed need for computer ethics today one only has to recognise two factors. Firstly, that sentience is necessary to be a member of a moral community but not necessary for decision making. Secondly, that the functioning of western society in its present form is now completely dependent upon computing.

In short, we have constructed a society where many decisions are either made by, or based upon data produced by, systems that lie outside our moral community – computers. In fact, in the western world the influence of computing is ubiquitous and often invisible: from the national grid to the fuzzy-logic washing machine, computers are everywhere. Because of this, the ethics of the creation and use of computer systems is necessary to an understanding of how our society functions, and a key factor in determining how we believe society can and should function.

Computer ethics defined

What then is contemporary computer ethics? The scope of the subject is laid out well in Walter Maner’s paper ‘Unique ethical problems in information technology’. In it Maner characterises computer ethics as being concerned with ethical situations that are either unique to computing, or where the use of computing significantly alters the ethical nature of the situation.

In practice, computer ethics is a broad inclusive subject that is often cross-disciplinary. Topics studied under its banner include: the social effects of tele-working; the rights of individuals and societies to access technology; the effects of globalisation of information and, of course, information privacy.

Decisions, decisions

Decision-making in some form or other is crucial to any ethical study. Indeed, ethics could be described as the study of the basis for and results of human decision-making. Recently, in several areas of ethics, including medical ethics, the decision-making process itself has come under scrutiny. For example, informed consent and living wills are currently hotly debated topics. Issues such as these require us to analyse what we think of as a valid decision-making process: if one grants the rights of individuals in a ‘living will’ scenario, is there a process that properly executes those rights?

So, to untangle situations where the ethical nature of decision-making itself is in question, I suggest an enquiry should be split into the following three areas:

  • The process of decision making.
  • The decisions that are being made.
  • The context of decision making.

Rather than going through each of these abstractly, I will apply them directly to computers. In this light I suggest the considerations become:

  • What is the intrinsic capacity of a computer to make accurate decisions?
  • What decisions do computers make?
  • What is the context in which computers make decisions?

How well do computers make decisions?

This question can be taken at two levels. The first involves an analysis of the type of decision that we believe computers can validly make. The second is concerned with our actual and potential abilities to accurately implement decision-making in systems.

At the lowest level of detail a contemporary computer is totally accurate: any particular logic gate is either on or off. But a question about decision making makes no sense at this level. For a computer to perform an operation that we can consider a decision, it must be working at a level where it is manipulating some encoded representation of that decision. So, the accuracy of decision making is in part dependent upon how accurately the given decision can be represented in a computer, and upon how accurately, in respect of that decision, we can communicate with the computer, i.e. ask the question and receive the answer.

Let’s take a couple of examples. A question such as ‘What is the result of 1 + 1?’ is fairly unambiguous, seems easy to represent accurately, and has an answer that is both known and uncontested. A question such as ‘Am I in Love?’, by contrast, is extremely difficult to represent and may or may not have an answer. Many computer systems operate somewhere in between these extremes. Computers are used to answer questions of a type that are often simple or complex depending upon one’s perspective, e.g. the question ‘Should this person receive a mortgage?’. I will return to a computer’s capability of answering this particular question shortly.
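The contrast between the two extremes can be made concrete in code. A minimal sketch (illustrative only; the second function is deliberately left unimplemented, since we cannot even state what its inputs or its procedure would be):

    # 'What is 1 + 1?' encodes directly and uncontroversially.
    def one_plus_one() -> int:
        return 1 + 1

    # 'Am I in love?' has no agreed encoding: neither the input data
    # nor the decision procedure can be accurately specified.
    def am_i_in_love(evidence) -> bool:
        raise NotImplementedError("no accurate representation exists")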

The next aspect of a computer’s capability to make decisions is this: given a particular decision, how accurately can it in practice be encoded in a computer? This comes down to the accuracy with which some ‘real world’ decision process is translated into computer code, and the accuracy of the various levels of code that ultimately translate the process into the binary code that a computer chip’s logic gates deal with.

The trouble with the translation from ‘real world’ to binary code (and back again) is that the task is extremely complex. Modern operating systems (the computer code that sits between an application, such as a word processor or a computer game, and the computer chip) are said to be the most complex structures that humanity has ever designed and created. In addition to this, the application programs that sit on top of the operating system are themselves highly complex.

As a result, comprehensive testing of the validity of the code that runs a modern computer is, in practice, impossible. To test every branch of the complex decision-trees that make up modern software would take years or possibly decades. This timespan is further extended by the need to re-test software after errors have been found and rectified (a process that often introduces more errors), and the need to validate the testing methods, tools and data themselves.
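A back-of-the-envelope calculation shows the scale of the problem (the branch count and testing rate below are assumptions chosen purely for illustration):

    # A program with n independent two-way branches has 2**n execution paths.
    branches = 100                         # modest for modern software
    paths = 2 ** branches                  # about 1.3e30 paths
    tests_per_second = 1_000_000           # an optimistic automated test rate
    seconds_per_year = 60 * 60 * 24 * 365
    years = paths / (tests_per_second * seconds_per_year)
    print(f"{paths:.1e} paths, roughly {years:.1e} years to try them all")
    # prints: 1.3e+30 paths, roughly 4.0e+16 years to try them all

Real branches are not all independent, and testing can be parallelised, but no plausible correction closes a gap of that many orders of magnitude.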

In practice, only the decision branches most likely to be utilised are tested with data in an expected range. The degree of testing is determined by the economics of system development and the primary use of the system.

So, error is an intrinsic property of modern computers and computer software. This of course presents no ethical issues in itself, but if a computer is used in a situation where an error would have ethical consequences then it becomes an important factor. Given the range of applications of computing today and the interconnectedness of computer systems, it is virtually impossible to isolate all the possible effects of error within a single system. Thus it seems reasonable to take the possibility of error into account in an analysis of all computer systems, unless particular circumstances dictate otherwise.

In practice, taking potential error into account in the implementation of a system means putting in place sufficient extra-system procedures to protect the rights of the individual. Further, the risk analysis carried out as part of a system’s implementation should take into account the probable results of error on the subject (e.g. the customer), rather than just the risks to the system owner (e.g. a company). Perhaps it should also be mandatory, whenever a new system is implemented, to put in place financial contingencies covering the effects of error on the individual. However, in practice little of this is ever done.

What decisions do computers make?

The computer as decision maker can play many different roles; these can be divided into four broad types:

Filter: As a filter a computer is making decisions that limit our choices in some respect, often in ways to which we do not give a second thought, e.g. font and layout options in many packages, pre-defined maps in graphics packages, and so on.

Automaton: As an automaton a computer is generally managing processes where decisions are based upon feedback of data defining the status of the system, such as the fuel injection system of a car or the cooling system of a nuclear power plant (a minimal sketch of such a feedback rule follows these role descriptions).

Administrator: The computer as administrator generally carries out tasks that existed in some form or other before the advent of computing; for example, accounting systems.

Oracle: The oracle computer is in a sense at the extreme end of the administrator role. In this role a computer makes, or appears to make, complex choices – for example, whether to offer a mortgage, and on what terms.
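As promised above, here is a minimal sketch of an automaton-style decision. The function, temperatures and thresholds are invented for illustration; real control systems are of course far more elaborate:

    # Feedback control: the 'decision' is a function of the reported state,
    # with a hysteresis band so the pump does not flutter on and off.
    def pump_should_run(core_temp_c: float, currently_running: bool) -> bool:
        if core_temp_c > 320.0:         # above the upper bound: always cool
            return True
        if core_temp_c < 290.0:         # below the lower bound: always rest
            return False
        return currently_running        # inside the band: keep current state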

Computer as Oracle

To take the case of the computer as Oracle: if one asks ‘In this role, what decisions can a computer be used to make?’, then in an important sense the answer is: just about any decision a human can take. However, in each implementation one needs to ask: ‘What decision is the computer actually making, and is the decision-making process valid?’

If we take the mortgage assessment system I have mentioned as an example, the underlying decision that the computer is making is whether some set of data (usually personal finance history and socio-economic information) matches some predefined set of criteria. Now it is possible for those criteria to contain a heavy bias towards white males working in the computing sector. So, it is possible that while seeming to encode ‘Is x eligible for a mortgage?’ one could be encoding attitudes such as institutionalised sexism and racism, resulting in the computer actually assessing the question ‘Does x’s data-set match the characteristics associated with white males?’
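A sketch of how this can happen (the field names, weights and threshold below are all hypothetical): no single rule mentions sex or race, yet the conjunction of criteria rewards a profile that closely tracks one demographic group.

    # Hypothetical criteria-matching: bias enters through proxy attributes.
    AFFLUENT_POSTCODES = {"SW1", "SW3", "W8"}        # invented example set

    def mortgage_score(applicant: dict) -> int:
        score = 0
        if applicant["occupation"] in {"software engineer", "IT manager"}:
            score += 40    # sector weighting favours one demographic
        if applicant["postcode"] in AFFLUENT_POSTCODES:
            score += 30    # postcode is a classic proxy for class and race
        if applicant["years_continuous_employment"] >= 5:
            score += 30    # penalises career breaks, e.g. for childcare
        return score

    def eligible(applicant: dict) -> bool:
        return mortgage_score(applicant) >= 70       # policy-set threshold

Inspecting any single rule reveals no prejudice; the bias lives in the combination of weights.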

It is important to note that due to the complexity of computer systems and the often highly distributed development process, even the system’s designers could be quite unaware of in-built prejudices. This is because no one person would understand enough of the system to be able to detect them.

In what context do computers make decisions?

What is the context in which a computer decision is made? It depends why you are asking. For the purposes of this general introduction to the subject I will look primarily at the social context, and focus specifically on the power relationships involved.

Computers do not have opinions, they are not prejudiced, and at the very lowest level they do not make mistakes.

Because of these facts, computer applications are often seen as ethically neutral and valid (in all senses). But as I hope you will appreciate by now, this is far from the case.

Take again the example of the mortgage assessment system. Suppose that a mortgage applicant is refused a mortgage by some institution on the basis of the system’s output. In many cases the system’s decisions are imbued with the authority of the corporation; combine this with the oft-supposed neutrality and infallibility of computers, and to whom can the applicant appeal? In practice the person operating the system often has neither the authority nor the skills to challenge the computer’s output. Indeed, this may be a direct result of the de-skilling and supposed cost-cutting that were part of the reason the system was implemented.

Now suppose for a moment that the reason the applicant was not accepted for a mortgage was a system error resulting from some rare combination of data almost unique to the applicant. Suppose further that the error was produced by a combination of errors in the hardware, the operating system and the application software. And lastly, imagine that the mortgage refusal leads to some calamitous circumstance for the applicant. In this type of case it is not clear to whom the applicant has a right of appeal, or who, if anyone, could be considered responsible for the error. In fact, in a case such as this, determining whether there was a computer error and what it was, let alone fixing it, would cost substantially more than the value of the mortgage applied for.

A practical ethic of computing

So, it would seem that there are certain kinds of decision to which computer systems are well suited and others to which they are not. The innate properties of computer systems demand that certain questions be asked when computerisation is considered, such as whether the intended decisions really can be validly encoded within a system. Also, it seems essential for system developers to challenge and validate the underlying nature of the procedures the system will automate, before those procedures are imbued with the ‘neutral’ authority of a computer and the power of the organisation.

In practice these considerations are rarely taken into account. The factors considered when determining whether a system should be installed are generally short-term, localised and economic. Further, organisations are able to implement systems almost as they see fit. The only legal restraints on them are general laws, in many cases inapplicable to computerised situations, and some specific laws relating to issues such as data protection and computer misuse.

So, should we just ban computers, turn them all off now? Well, no. What makes computer ethics an extremely challenging area is that computer systems often bring massive benefits both to the organisation creating the system and to us, the general public. Just think of heart monitoring systems and other computerised devices in intensive care units. The problem with the ethical status of any particular system is a general one, one that applies to any decision with a clear local short-term benefit and a potentially non-localised longer-term cost that may or may not directly affect the decision maker. To take our mortgage example for the last time: while certain prejudices may be encoded into the system, are they any worse than those existing within contemporary society? Even if they are, it might be argued that the increased convenience of faster, cheaper mortgage quotes outweighs this cost.

Unfortunately this argument could only practicably be applied if computer systems were isolated and could be considered on a simple case-by-case basis. But this is not so. Computerisation is already widespread, systems are interdependent, and the nature of our society is in part dependent upon them; it is simply not possible either to re-evaluate all the systems that have been implemented or to look at new ones in isolation.

This being so, it is necessary for a practical ethic of computing to be implemented by governments, companies and individuals. This ethic would contextualise computers within society and contextualise social interactions, and especially decision-making, within the computerised environment in which they often occur. That is, it would be necessary to see computers for what they are: an automated encoding of our society with all its prejudices and fallibility.

I realise that this seems like an impossible dream. It would be if it were not for a single factor: the year two thousand. The ‘Y2K bug’ or ‘Y2K crisis’ has, ironically, come just at the right time.

What the Y2K crisis will make clear is the dependence of society on computing, which has so far remained largely implicit. It will demonstrate in a very simple way that even when apparently doing the most elementary sums computers get it wrong, and that this has consequences that affect all of us directly.
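The defect at the heart of the crisis is elementary: to save storage, many systems recorded years as two digits, so the simplest date arithmetic breaks at the century boundary. A minimal sketch:

    # Two-digit years: adequate for decades, wrong on 1 January 2000.
    def age_in_years(birth_yy: int, current_yy: int) -> int:
        return current_yy - birth_yy

    print(age_in_years(65, 99))   # 34  -- correct in 1999
    print(age_in_years(65, 0))    # -65 -- nonsense once '00' means 2000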

Therefore we should welcome the Y2K bug as a sharp reminder of what we as a society have become: a reminder that ever since we started to use flint tools we have had cause to reflect on what our tools make us, and a need to take care that we do not become subordinate to our own creations.

© Andrew Reynolds 1999

Andrew Reynolds writes about the implications of new technology, lives in a very old house, and studies philosophy at Birkbeck College. He’d like to thank Gary, Ivana, Olivia and Cathy at Birkbeck for help with the article.