Climate of Disbelief
Paul Biegler asks whether the way we form our beliefs means we’re hardwired to succumb to global warming.
In January 2012 physicist Stephen Hawking decided to celebrate his birthday early, with a warning of apocalypse: “I think it is almost certain that a disaster such as nuclear war or global warming will befall the Earth within a thousand years” (BBC Radio 4 interview, 6th Jan 2012). Further souring the ambience, he warned that during “a period of unprecedented climate change, scientists have a special responsibility once again to inform the public and to advise leaders about the perils that humanity faces.” In the wake of ex-UN Climate boss Yvo de Boer’s salvo that the UN’s 2014 Climate Report would “scare the wits out of everyone,” Hawking’s words have fresh oxygen, albeit with lashings of carbon dioxide, too.
But if the physics professor hoped to avert Armageddon by playing Jeremiah, he failed to spot a flaw in his formula. Warning the public is one thing. Having them accept, and then act on, this dire forecast, is another. While the evidence for human-induced global warming has gained substance, the population of naysayers remains decidedly bulky too. A 2012 Gallup poll found only 52% of Americans believe the effects of warming are upon us, down from an earlier 61%; and on causation, just 53% concurred that warming was anthropogenic. Why do so many people, in the face of so much evidence, remain so staunchly unconvinced?
A first guess might be that they just think the science is wrong. Yet the most contentious science – modelling the warming rate and so the imminence of environmental jeopardy – baffles even many experts. Rather, then, many attitudes seem to be shaped by the ‘manufacturers of doubt’, whose minority opinion has gained an over-represented toehold in the popular media. The result is a ‘sampling error’ about the strength of scientific evidence on the issue: disproportionate weight accorded to dubious data breeds erroneous conclusions. Exhortations to downsize our carbon footprint can also cause backlash. Reaction against being told what is good for us is widespread. The ‘boomerang’ effect has proven especially challenging for initiatives to reduce adolescent smoking.
But there is another candidate to explain our reluctance to embrace the painful but credible truth of a warming planet. The idea of wishful thinking – we believe what we wish to be true – is hardly new. But its scientific investigation took off in the early 1990s in the wake of a fêted article by the late social psychologist Ziva Kunda, ‘The Case for Motivated Reasoning’.
Biasing Your Own Thinking
‘Motivated reasoning’ describes how emotional commitments affect the way we process information. To see how, try a thought experiment. Imagine visiting your doctor with a cough. If your hacking is extreme, your GP might cave in and order a chest X-ray. Now imagine, when returning for the results, that your physician is uncharacteristically grave: “I’ll be straight up. The news isn’t good. There’s a growth that looks suspiciously like a tumour. We’ll need to arrange some further tests.” If you’re like most, your first response will be disbelief: “Doctor, are you sure? Couldn’t you have mixed up the X-rays? Is the radiologist mistaken? Can I get a second opinion?” Bad news is an unwelcome visitor, and doesn’t cross your threshold without a fight. There would be no such remonstrations if the X-ray were clear. Instead, with relief and an inner smile, you’d settle up with the receptionist and be blithely on your way. There’s little resistance to a diagnosis of good health.
This vignette depicts a famous type of motivated reasoning: confirmation bias. We tend to accept information that agrees with our take on the world, and to discount data that contradict it. Many studies affirm this. People told they scored low on an IQ test preferred articles that criticised rather than supported the test’s validity; and women who drank a lot of coffee were quicker to note flaws in a fictional study linking caffeine consumption and breast cancer.
So, do people reject the forecast of global warming doom because it upsets their worldview? John Cook of the University of Queensland is in no doubt in an article in online journal The Conversation of 6th September 2012. Cook considers the work of Anthony Leiserowitz and colleagues from Yale and George Mason University. Surveying over a thousand Americans, their quest was to find evidence that climate beliefs split along the political divide. While the divisions aren’t sharp, Democrats had stronger beliefs in anthropogenic global warming than did Republican or Tea Party affiliates. For Cook, the inference flows easily: “For supporters of an unregulated free market, regulating polluting industries to reduce global warming is so unpalatable that they are far more likely to reject [the idea] that climate change is happening.” And Cook is no lone voice. Columbia University’s Center for Research on Environmental Decisions devotes a full page to confirmation bias in their guide The Psychology of Climate Change Communication (2009). Of course, as sceptics eagerly remind us, confirmation bias cuts both ways. Those with left political leanings, keen to see big emitters pay, will be partial to the warming data. The tie-breaker is that warming predictions are backed by good scientific pedigree. And although scientists are no strangers to bias, there is a built-in failsafe: published articles must first run the gauntlet of peer review. As Matt Ridley put it in a recent Wall Street Journal article, “Most scientists do not try to disprove their ideas; rivals do it for them. Only when those rivals fail is the theory bombproof” (July 27, 2012). Bias towards conclusions that survive this sort of trial by ordeal is not, surely, such a bad thing.
Confirmation bias is a thorn in the side for climate communicators, but the stakes are high. The battle is over truth, and the enemy is biased thinking. So to glean the knowledge necessary to prevail, a look at the machinations of bias beckons.
The Happy Thought Trap
The student of motivated reasoning must ask two questions: ‘How does it work?’ and ‘Why do we do it?’ On the ‘How’ issue, one school of thought is that wishful thinking reduces cognitive dissonance. The logic is as follows. Imagining that our cherished projects will actually materialise makes us feel good. When someone suggests the contrary, our imaginings become uncertain, which makes us feel bad. We are left with two options. We can give up our aspirations, which is a big ask. Far better to downgrade the credibility of the objector.
You don’t need a weatherman to know which way the wind blows
The ‘Why’ of motivated reasoning is the big question, and while there is no victor among theories, there are some strong contenders. The underlying theory is that evolution endowed us with feelings to propel us away from threat and towards survival and reproductive success. It is no accident that food, shelter and mates impart a warm glow. Your emotions are your genes’ insurance that your behaviour tilts in their favour, often before the evolutionary newcomer, the thinking part of your cortex, has any idea about it.
According to psychologist Paul Slovic, emotional feelings, or ‘affect’, work as a ‘heuristic’. Heuristics are decision-making shortcuts. And we use these emotional shortcuts all the time because they deliver the right answer reliably and more quickly than sitting down and mulling over the pros and cons. Take the task of predicting Wimbledon winners. Yes, you could go over the players’ form, history on the surface, recent injuries, and so on. But Gerd Gigerenzer found that you’re much better off asking yourself if you recognise the player’s name. Novices who simply followed the rule, “If I recognise the name I predict a win” outdid expert commentators at Wimbledon in 2003.
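Gigerenzer’s recognition rule is simple enough to sketch in a few lines of code. The sketch below is an illustration only: the names and the ‘recognised’ set are invented placeholders, not data from the 2003 study.

```python
def predict_winner(player_a, player_b, recognised):
    """Recognition heuristic: if exactly one of the two names is
    recognised, predict that player wins; otherwise stay silent."""
    a_known = player_a in recognised
    b_known = player_b in recognised
    if a_known and not b_known:
        return player_a
    if b_known and not a_known:
        return player_b
    return None  # both or neither recognised: the heuristic gives no answer

# Hypothetical novice who has heard of only one of the two players
recognised_names = {"Federer"}
print(predict_winner("Federer", "Obscure Qualifier", recognised_names))
```

The point of the sketch is that the heuristic ignores form, surface history and injuries entirely; it exploits a single cue, name recognition, and declines to predict when that cue cannot discriminate.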
The affect heuristic works the same magic using feelings. The example par excellence is disgust. A wrinkled nose and vomitous urge identify spoiled food, and so preserve good health, well before deliberation on the offending bacillus can kick in.
The ‘affect heuristic’ also makes us project our feelings into the future. Faced with a number of options, we imagine how each might turn out, then ‘tag’ it with feelings. Outcomes that earn positive sentiments are the strongest motivators for action. This confers evolutionary advantage, because what feels good for the goose is probably good for its gonads.
What seems to happen in confirmation bias is that we tag not just envisaged outcomes with feelings, but beliefs too; and just as if the beliefs themselves were good or bad for us, we embrace those imbued with positive feeling, and shun those that accrue disdain. Chris Mooney put it concisely in a 2011 article on the website Mother Jones, ‘The Science of Why We Don’t Believe Science’: “We push threatening information away; we pull friendly information close. We apply fight-or-flight reflexes not only to predators, but to data itself.”
But why would we use our feelings, rather than, say, the evidence, as a basis for belief? On this, the schools diverge along time-honoured neo-Darwinian lines: that is, doing so either directly conferred advantage itself, or is a by-product of another evolutionary adaptation.
Let’s consider the by-product line of argument first. Common sense says beliefs should respond to evidence, not desire. Faced with a charging rhinoceros, it may be tempting to believe that it isn’t happening, but to succumb to this temptation would surely be a fatal weakness. American psychologist George Ainslie reckons, when it comes to our prehistory, this intuition is on the money: “Where reward is strongly bound to survival resources – food, warmth, avoidance of injury – the cost of misbelief will be deprivation or pain, so instrumental beliefs will be constrained mostly by their predictiveness” (Behavioral and Brain Sciences, 32, 2009). In other words, back in the day, when things were simpler, true beliefs would generally be accompanied by reward, and so feeling good. Conversely, people who consistently felt good about false predictive beliefs weren’t generally around long enough to take Parenting 101. However, Ainslie says that how our beliefs feel is no longer a consistent guide to their accuracy, since as our intelligence and foresight grew there was an increasing disconnect between beliefs and survival outcomes. Our imaginings could turn to musings, and some of the possibilities were wondrous. We came to believe that good things would come to pass; and the beliefs felt good. But the catch was that any test of the belief’s accuracy lay a long way in the future. Good feelings were a reward for beliefs whose falsity might never be exposed on the battlefield of natural selection. And so the feelgood fest could continue without any effect on survival. Quite simply, feeling good about one’s beliefs in no way legitimises their truth. We can get warm and fuzzy over just about any unlikely outcome.
The ‘by-product’ theory isn’t the only possibility. Instead, motivated reasoning may indeed in itself have offered evolutionary advantage. In The Evolution of Misbelief (2009), Ryan McKay and Daniel Dennett outline two possibilities for how this could be so.
First, daubing one’s psychological canvas with a rosy tint may be beneficial because it lowers stress. Contemplating a palatable rather than perilous future means less worry, and an unfurrowed brow reduces stress hormones like cortisol and adrenalin. In the short term, these hormones trigger life-saving fight or flight responses. When the rhino snorts its intent to charge, these hormones raise heart rate, dilate pupils, and close sphincters, to prepare the body for combat or fast retreat. But chronic stress keeps these hormones pumping into the bloodstream, and that’s damaging. Prominent among the results are high blood pressure, hardening of the arteries, heart attack, stroke, reduced bone density, and diabetes. Chronically stressed laboratory animals go downhill quickly. Rats show the murine equivalent of human depression behaviour, becoming immobile, anorexic, and eventually dying. The lesson is that, for humans facing an uncertain future, there was much to gain by lowering the stress levels by accentuating the positive, even if the negative wasn’t eliminated entirely. And this strategy may have proven decisive during humanity’s transition from forager to agrarian. After weathering a few seasons of scarcity and some of plenty, Cropfarmer Man could opt for either eventuality in his nightly contemplations. Avoiding the harms of stress, the optimist may have had the final laugh on his doomsaying brethren.
Other research supports different health benefits of optimism. Psychologist Shelley Taylor and colleagues found HIV positive men who were optimistic about their illness lived nine months longer than those with a stance of ‘realistic acceptance’. So-called ‘positive illusions’ are thought to work by motivating striving. As Taylor puts it, “Dispositional optimism may reduce the likelihood of demoralization or overriding fears and promote active coping efforts to influence the threat.”
If either hypothesis of the benefits of optimism is correct, the somewhat startling upshot is that motivated reasoning may confer advantage upon those it drives to reject the climate science conclusions: if environmental catastrophe is a hundred years off, maybe we should just stress less and reap the benefits of lower cortisol? And perhaps a sunny disposition will make for more effective sandbagging as the waters rise?
There are problems with this analysis. Central to the protective effect of positive illusions is a greater sense of ‘can do’ and better subsequent striving. Yet many climate sceptics seem to take their brighter prognostications as a reason for inaction rather than positive preparation. Further, the evolutionary benefits of wishful thinking only exist when the outcomes are uncertain, such as the equal uncertainty of a flourishing or desultory crop in the next season. If there were no doubt of a crop failure, surely realism and an immediate start on contingency plans would be in order. But concerning climate change, our prospects are increasingly certain, and so our efforts must begin now. And, furthermore, if Ainslie is right, and feeling good about beliefs says little about their veracity, we should have no truck with any theory of the advantages of motivated reasoning in any case.
How then to persuade the dissenters? Confirmation bias is driven by feelings. Psychologist Jonathan Haidt makes the analogy that our emotions are like an elephant, our reason its rider. Rationality may direct the beast with a few pulls and twitches; but once the creature is set in its direction, flimsy reins are no match. Reason, as Enlightenment philosopher David Hume taught us, becomes slave to the passions. And so climate communicators must appeal to emotion.
This is old news to marketers, who understand all too well the psychology of emotional appeal. The ‘vividness’ effect, for example, refers to the prodigious motivating power of sound and vision. Statistics on the millions of children who die annually from preventable disease in the developing world can be read with barely a hair turning; yet harrowing images of a dying child result in telephones lifted and children sponsored. Stimulating images and sound recruit emotion, bias decisions, and motivate action. Yet climate change messages generally come as either text or talk, and from scientists, not advertising agencies. Engaging the good men and women of Madison Avenue might be a start, then. However, advertising is no panacea. Quite simply, people are wising up to the way it manipulates, and increasingly switching from TV to ad-free downloads. A more informed and subtle answer is to deliver messages in a way that appeals to both facts and values.
Dan Kahan and colleagues provide a fascinating insight into how this might work. In their 2007 study ‘Making Sense of – and Making Progress in – the American Culture War of Fact’, participants read a fictional newspaper article featuring three claims: the Earth is warming; humans are the cause; and disastrous economic outcomes might result. But the article came with alternative headlines. One version read ‘Scientific Panel Recommends Anti-Pollution Solution To Global Warming’, the other, ‘Scientific Panel Recommends Nuclear Solution To Global Warming’. The results were telling. ‘Hierarchical individualists’ – those who favour low taxes on wealth and the rights of individuals over the community – were more persuaded by the climate claims when they fell under the ‘nuclear solution’ title. This finding was consistent with the group’s predilection for an unregulated market. By contrast, ‘egalitarian communitarians’ – those who highly value community interests and equal sharing of wealth – thought the climate claims more convincing under the anti-pollution banner. So the same content was differentially persuasive when teamed with cues that spoke to the participants’ values. Chris Mooney’s take on the study was that “Conservatives are more likely to embrace climate science if it comes to them via a business or religious leader, who can set the issue in the context of different values than those from which environmentalists or scientists often argue” (ibid.).
The moral of the story is many-sided. A whole heap of humanity seems intent on ignoring the climatic call to arms. There seems a good chance they are driven by a hardwired predilection for good news even if it departs from the facts. If so, the rallying cry had better appeal to the heart, because statistics and considered prose are unlikely to carry the day. But there is another story here too. Those who would doubt our descent into climate catastrophe may just have the upper hand. If they hedge their bets, and make some preparation for the worst without worrying too much, they might just come through in better shape than the purveyors of doom. But science has got a lot of things right, so there’ll be no bliss if we simply ignore it.
© Dr Paul Biegler 2014
Paul Biegler is Adjunct Research Fellow at the Centre for Human Bioethics, Monash University, Melbourne, Australia. He is a former emergency physician, and author of The Ethical Treatment of Depression (MIT Press 2011), which won the Australian Museum Eureka Prize for Research in Ethics.