
Ethics & Uncertainty

Michael Jordan asks how knowledge of circumstances affects our morality.

Sometimes we have to make moral decisions without having any certainty about the consequences of the different courses of action that are open to us. Would knowing the probabilities of the results of our actions change our moral decisions? To consider this question we’ll start with the classic thought experiment about the runaway trolley, and then give it a twist by introducing uncertainty.

Cosmic Dice © Amy Baker 2019. Please visit instagram.com/amy_louisebaker

Off Our Trolleys

The thought experiment known as the Trolley Problem was introduced to ethics by the late Philippa Foot. I first heard of it when my lecturer asked our Actuarial Science honours class about it. The thought experiment goes as follows: You see that a runaway trolley car (or tram, if you are British) is about to kill five people walking along a track. But you are standing by the lever that switches the points, and if you pull it the trolley will divert onto another track where only one person is stuck, whom the trolley will then kill instead. Time is running out. Do you pull the lever?

To my horror, I was the only student who didn’t want to pull the lever. Everyone else had a simple reason for pulling it: five lives are worth more than one.

On the surface this sounds rational, and I was cruelly mocked for not pulling the lever. I’ve spoken to other friends about this, and some have even suggested I’m a psychopath! For this reason, I think I should defend my position.

For one thing, I reject the claim that five lives are necessarily worth more than one. To me life is non-fungible – meaning that each life is so unique and multidimensional that we cannot compare lives, nor say that one group of lives is worth more than another simply because the first group contains a larger number of lives. To illustrate this point, let us flavour the thought experiment a little: What if the five people were notorious serial killers, and the one person on the other track was your mother? Would you still pull the lever?

The vast majority of people to whom I posed this new version of the thought experiment changed their decision, and decided not to pull the lever: they would rather let five serial killers die than kill their own mother. They were not pleased when I pointed out the inconsistency in their morality. They were no longer being utilitarian – pursuing the greatest good for the greatest number. Instead they were being biased and judgmental.

We can also break the utilitarian view by framing the thought experiment in another setting. You’re a doctor at a hospital and your ward round contains five people dying from various organ failures. A healthy person walks in, delivering a pizza maybe, and you know that if you could harvest that person’s organs you could save the other five people. Would you give the order to kill him?

Not many of my friends now want to sacrifice the one life for the other five. The reasons they give me are interesting in their variety; but the point stands that their position again shifts away from utilitarianism. Here they mostly seem to move to Kant’s deontology, or duty-based ethics.

Immanuel Kant argued that every human being is an ‘end in themselves’: a basic moral unit who is due moral consideration, and who should never be used merely as a means to other ends. Kant also said that you should act only in such a way that you would be willing to make the principle of your action a universal principle for all moral beings. (Jesus said it more eloquently: “Do unto others as you would have them do unto you.”) You would not want to make the utilitarian principle ‘Sacrifice others for the greater good’ a universal principle, as someone might then want to sacrifice you for some greater good. This is why people don’t usually want to harvest someone’s organs.

Deontologists versus Utilitarians: Infinity War

The 2018 movie Avengers: Infinity War contrasts these two philosophies. Captain America, the heroes’ leader, takes up the Kantian position, and Thanos, the villain, the utilitarian one.

For the few of you reading this who haven’t seen the movie: Thanos wants to kill half the population of the universe to save the other half from the effects of overpopulation. To do this, he needs all the infinity stones. One stone is inside the head of a character called Vision. There’s a discussion about whether the heroes should quickly destroy that stone, even if it means killing Vision, so that Thanos can’t carry out his plan. Captain America says, “We don’t trade lives”, and refuses to kill one person even to save half the universe. Thanos (spoiler alert!) gets the stone anyway and kills half the universe, including Vision.

Did Captain America make the right decision? If you were Captain America, what would you have done?

To those who say Captain America should have allowed Vision to be killed to stop Thanos, I point out the following: if we accept Thanos’s assumption that the only way to save the universe is to halve its population, then your utilitarian motive for killing Vision is paradoxically thwarting Thanos’s greater utilitarian plan.

Let me burden you with another question. Suppose you are directing, from Earth, a mission returning from Mars. On board the ship are two astronauts. But because you were so busy trying to build driverless electric cars on Earth, you didn’t pack enough food for both astronauts to survive the return trip, only enough for one. Oops. Do you allocate all the food to one astronaut and starve the other; or do you let them share the food, in which case, according to your calculations, they will both die of starvation before reaching Earth?

Most of my friends choose to save one astronaut. After all, one survivor is better than two dead. Well, this is exactly the thinking of Thanos: you can’t starve the one astronaut and also say Thanos is wrong. And if you’re morally inconsistent, you aren’t thinking clearly.

It might be worth pointing out that Thanos’s plan has a strong foundation: he has all the infinity stones, so he knows everything – including that this is the only way to save the universe.

In Search of Another Way

Nevertheless, the statement ‘There must be another way’ comes to mind, as it’s how I would respond to the runaway trolley. I would not do nothing: I’d look for a brake or some other way to save the people. But my Kantian position would never let me consider the option of trading lives.

Captain America probably had the same mindset – there must be another way. In fact, most superheroes in most movies take this approach, and through sheer luck they are usually able to save the day without having to sacrifice anybody.

Maybe this is what’s happening with the other thought experiment, in the hospital. Do you kill the healthy person to save the other five? People say no. And I’ve realised it’s not the same as the runaway trolley. With the trolley thought experiment, people usually assume a determined outcome. With the hospital thought experiment, people don’t make this assumption. Instead, they’re thinking, “There must be another way. Maybe we can give the sick people medication, or perhaps we can harvest organs from someone who’s already dead.” Could this tacit assumption be the key to our moral responses?

I decided to test this on a new group of friends. Once again I presented them with the classic trolley thought experiment; and once again they all wanted to pull the lever to save the five at the expense of the one. This time, however, I took a different approach to show them that their utilitarianism is on shaky ground: I provided them with ‘another way’. What if there actually is a brake on the trolley – but it only has a 50% chance of stopping the trolley before it kills the one or the five? And unfortunately, there isn’t enough time to pull both the brake and the lever. What would you do? Would you pull the brake, or would you pull the lever?

Most of them pull the brake. They say they do this because there is a chance of the best-case scenario, in which no one dies. I challenge them by pointing out that if the brake fails, the trolley carries on and kills the five: pulling the brake therefore means an expected 2.5 deaths (50% of five), while pulling the lever means an expected one death. So on utilitarian principles, you should pull the lever and not the brake. They still insist on pulling the brake.
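
To spell the calculation out (assuming, as above, that the five die whenever the brake fails):

Expected deaths if you pull the brake = (0.5 × 0) + (0.5 × 5) = 2.5
Expected deaths if you pull the lever = 1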

I think what we’re seeing here is that in a deterministic scenario people tend to a utilitarian position, yet as soon as we add uncertainty, people’s ethical reasoning changes. Why? Is it because uncertainty introduces hope and we’re naturally optimistic? Or is it because the very uncertainty removes a degree of responsibility: “It’s the brake’s fault. Those people were unlucky. I’m not to blame”?

Captain America believed he had a chance to save everyone, and so chose not to sacrifice Vision. Thanos believed that there was no other way, and therefore he acted in a utilitarian way.

Further Uncertainty Principles

How much uncertainty is needed to move us from utilitarianism to Kantianism, or to something else? If the brake only had a 10% chance of saving the five lives, would you still pull it? What if the chance of the brake working is unknown?
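
A quick calculation shows where pure expected-value reasoning flips, if we again assume that the five die whenever the brake fails. Let p be the chance of the brake working. Then:

Expected deaths if you pull the brake = 5 × (1 − p)
Expected deaths if you pull the lever = 1

The brake only beats the lever when 5 × (1 − p) < 1, that is, when p is greater than 0.8. So a strict expected-value utilitarian should prefer the brake only once it is more than 80% reliable; at 50%, let alone 10%, the lever wins.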

In life we usually don’t know the probabilities associated with our actions; we can only estimate them, and there’s a lot of room for error. Sometimes we pretend probabilities don’t exist. In other words, we round them off: if something is likely to occur, we might round the probability up and say it’s certain to occur; if something is unlikely, we might round the probability down and say it will never occur. Yet actuarial science has taught me that nothing is certain to occur: the actuarial profession is built around dealing with uncertainty. This view stands in direct contrast to determinism, which says you could predict the future with certainty given sufficient knowledge of the past.

However, could it be that ethical differences come down to different views about the nature of reality? Perhaps determinism leads to utilitarianism, whereas indeterminism leads to something else.

Why is this important? Well, maybe a lot of ethical debates could be resolved if people took a step back and stated their assumptions about the chances of the various outcomes. Perhaps every ethical debate should begin with each party stating the potential outcomes they foresee from the actions under consideration, and the rough probability they assign to each.

Understanding people’s different views on uncertainty, and the assumptions on which they base their arguments, could go a long way towards reducing conflict.

© Michael Jordan 2019

Michael Jordan recently became a Fellow of the Actuarial Society of South Africa.
