
Why False Beliefs Are Not Always Bad

Sally Latham argues that sometimes it’s better to be wrong.

It is a fairly common assumption that factually correct beliefs are to be strived for and factually incorrect beliefs are to be avoided. In fact, for many philosophers, the very cornerstone of the discipline is that true beliefs are good and false beliefs are bad.

Yet this assumption is being challenged by Project PERFECT (Pragmatic and Epistemic Role of Factually Erroneous Cognitions and Thoughts). Headed by Professor Lisa Bortolotti at the University of Birmingham, this project aims to establish whether cognitions that are in some important way inaccurate can ever be good for us. Delusional beliefs, distorted memories, beliefs that fail to reflect social realities, and so forth, are frequent in the non-clinical population, and are also listed as symptoms of clinical psychiatric conditions such as schizophrenia and dementia. Project PERFECT investigates whether such beliefs could have redeeming features. The hypothesis is that there can exist false but useful beliefs.

I will explain some of the evidence supporting this hypothesis using two examples from the Project PERFECT research: firstly depressive delusions, and secondly beliefs that fail to reflect social inequalities. Both examples share the underlying theme that inaccurate or imperfect cognitions can be epistemically innocent, and that where they are, such distortions in belief can be beneficial.

In explaining the concept of epistemic innocence in a paper in Consciousness & Cognition in 2015, Bortolotti draws a comparison with the ‘justification defence’ in UK and US law. This is where an act that would normally be considered criminal can be justified under the particular circumstances in which it was performed; for example, if someone knocks another person unconscious to prevent the serious harm they might otherwise do to themselves or others. The act brings costs, but it’s justified as an emergency response because it spares a greater cost that could not otherwise have been avoided. In this emergency situation, the otherwise criminal act is the lesser of two evils. Similarly, Bortolotti argues that inaccurate or imperfect cognitions, for example delusions or factual misrepresentations, can be epistemically innocent if:

(a) They provide ‘epistemic benefit’ – that is, they help the person to gain, retain, or make use of knowledge in some significant way.

(b) There is no available alternative that would confer the same benefit without higher cost in terms of knowledge or beliefs. (‘Epistemic’ means ‘referring to beliefs or knowledge’.)
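For readers who like the criterion spelled out schematically, here is a minimal sketch of the two-condition test in Python. The data structure and the numerical scoring are my own illustrative assumptions, not part of Bortolotti’s formulation:

from dataclasses import dataclass

@dataclass
class Cognition:
    epistemic_benefit: float   # e.g. coherence preserved, knowledge retained or used
    epistemic_cost: float      # e.g. evidence misinterpreted, memory distorted

def epistemically_innocent(candidate: Cognition, alternatives: list[Cognition]) -> bool:
    # (a) the cognition must deliver some epistemic benefit
    if candidate.epistemic_benefit <= 0:
        return False
    # (b) no available alternative confers the same benefit at lower epistemic cost
    return not any(
        alt.epistemic_benefit >= candidate.epistemic_benefit
        and alt.epistemic_cost < candidate.epistemic_cost
        for alt in alternatives
    )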

Depressive Delusions

First we’ll look at depressive delusions, the subject of a paper by Bortolotti and Magdalena Antrobus in the Unisinos Journal of Philosophy (May 2016). A partial definition of delusion is “a false belief based on incorrect inference about external reality that is firmly sustained despite what almost everyone else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary” (American Psychiatric Association, 2013). One common depressive delusion is that one is failing to be there for others. Exaggerating one’s failing, or the extent of one’s responsibilities, in this regard can lead to an excessive sense of guilt. Other false beliefs include delusions of persecution or of illness. These delusions emerge in cases of severe depressive disorders (sometimes known as psychotic depression or depressive psychosis), as well as in schizophrenia and other psychoses.

It is important when considering the knowledge benefits of depressive delusions to first distinguish them from schizophrenic delusions. According to a paper by Giovanni Stanghellini and Andrea Raballo (in the Journal of Affective Disorders 171, 2015), schizophrenic delusions provide a (false) ‘revelation’ by uncovering new content that is unfamiliar to the person. A ‘dawn of a new reality’ occurs which alters the person’s perspective; for example, the ‘discovery’ that the friendly behaviour of a neighbour is all part of a plan to spy on them and ultimately harm them. In contrast, depressive delusions confirm previously acquired beliefs related to the self. Nothing new is discovered; old beliefs are merely reaffirmed. Delusions of guilt, for example, will validate a pre-existing conviction a person has that they are guilty of wrongdoing.

With this in mind we can start to examine the epistemic benefits of such delusions. To understand this, let’s refer to Jean Piaget’s Equilibration of Cognitive Structures model, as set out in his 1977 book The Development of Thought.

Central to this model is the concept of a schema. A schema is a set of linked mental representations of the world used to understand new situations and how to respond to them. An example would be a schema about how to purchase goods in a shop, or how to classify people according to gender. Schemata have evolutionary benefits in terms of the speed and efficiency of our information processing. When a person’s existing schemata can explain what someone experiences, the result is cognitive balance, also known as cognitive equilibrium.

When someone is presented with a new object or new situation, says Piaget, there are two key processes for maintaining cognitive balance: either assimilation, whereby the existing schema is used to deal with this new object or situation; or accommodation, whereby the existing schema does not neatly apply and itself needs to be modified. The successful development of cognitive structures is known as adaptation, and it requires both processes. In the case of ineffective mental functioning, one of the processes has to compensate for the deficiency of the other. When equilibrium cannot be reached, this is a source of anxiety for the person.

The lack of equilibrium we may feel can be understood as cognitive dissonance, which is “the mental stress or discomfort experienced by an individual who holds two contradictory beliefs, ideas or values at the same time … or is confronted by information that conflicts with existing beliefs, ideas or values” (Leon Festinger, A Theory of Cognitive Dissonance, 1957). Inconsistency between existing beliefs and incoming information leads to psychological discomfort which we are naturally motivated to reduce. This is one way of explaining why we strive for coherence in our thinking. There is evidence that a prolonged state of cognitive dissonance leads to increased anxiety and symptoms resembling post-traumatic stress disorder (see Anxiety: The Cognitive Perspective by Michael Eysenck, 1992).
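As a rough illustration of how assimilation, accommodation and dissonance relate, here is a minimal sketch in Python. Representing a schema as a set of expectations, and the overlap threshold, are my own simplifying assumptions, not Piaget’s formalism:

def process_new_information(schema: set[str], observation: set[str]) -> str:
    """Toy model: a schema is a set of expectations, an observation a set of features."""
    if observation <= schema:
        # Assimilation: the existing schema already covers the observation.
        return "assimilated: equilibrium maintained"
    overlap = len(schema & observation) / max(len(observation), 1)
    if overlap >= 0.5:
        # Accommodation: the schema itself is modified to fit the new information.
        schema.update(observation)
        return "accommodated: schema revised, equilibrium restored"
    # Neither process succeeds: the conflict persists as cognitive dissonance.
    return "dissonance: discomfort until the inconsistency is reduced"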

But what if there’s some psychological reason why an individual cannot change her beliefs to fit in with new information? For example, in cases of severe depression individuals acquire increasingly negative beliefs about themselves because their processing of self-related information is disrupted. Individuals unable to change their beliefs will attempt to reduce mental inconsistency in other ways – for example, by reinterpreting their experience, completely rejecting the new information, or by seeking additional support for the previous beliefs from those who share them.

In severe depression, negative schemata can be formed early in life but remain dormant until activated by adverse circumstances, often resulting in critically low self-esteem. Once activated, there is a bias towards interpreting new information in a way consistent with the schema, that is, negatively, at the expense of positive or neutral interpretations. In some circumstances, positive self-appraisals can actually cause discomfort and anxiety, and are rejected in favour of negative ones that fit pre-existing beliefs.

For example, Jane has a boyfriend, yet her negative self-representation includes the belief that she is unlovable and that no-one will want her. When her boyfriend surprises her with a thoughtful gift, this show of affection is at odds with her schema and causes cognitive dissonance. Rather than changing her belief that she is unlovable, she distorts the meaning of this kind action and interprets it as an act of guilt by her boyfriend because he has been thinking about other women, or about leaving her. This inaccurate thinking (given the assumption that her boyfriend does indeed love her) has obvious emotional and other costs. However, if these costs are outweighed by the preservation of consistency and mental equilibrium, and by the removal of the anxiety caused by the dissonance between her self-belief and the evidence, then this belief could be epistemically innocent. The epistemic benefit of depressive delusions can be the preservation of a coherent self-representation, even if it is a very negative one. So Antrobus and Bortolotti hypothesise that in cases of severe depression, the distorted interpretation of experience to assimilate it into an existing schema can be epistemically innocent where the cost in knowledge of the distorted negative belief is outweighed by the benefit to that person in terms of reduced anxiety. This is in contrast to the popular opinion that delusions always need to be eliminated as both epistemically and psychologically costly.

As I mentioned earlier, there is a second condition for epistemic innocence – that there is no alternative that would confer the same benefits without the costs in knowledge. In the case of depressive delusions, people have generally formed their negative self-image through a long process of negatively-biased learning; positive information has for so long been reinterpreted or left unintegrated into their schemata that updating those schemata is simply not a practical option.

Beliefs That Fail To Reflect Social Realities

Let’s now consider an example concerning inaccurate beliefs about social facts.

In her paper ‘Dissolving The Epistemic/Ethical Dilemma Over Implicit Bias’ (Philosophical Explorations, Vol 20, 2017), Katherine Puddifoot of Project PERFECT considers the issue of stereotyping, specifically, the automatic stereotyping involved in implicit bias. A common definition of a stereotype is that it is a widely held but fixed and oversimplified image or idea of a particular type of person or thing; for example, of females as carers/nurturers or of males as leaders. ‘Implicit bias’ refers to attitudes that prejudice our understanding, decisions, and actions in an unconscious manner. I will continue to use Puddifoot’s examples of gender, although she does use others too.

Ethically, the general consensus is that until we know otherwise we should treat all people as equally likely to possess certain traits; for example, in their attention to detail, commitment, nurturing, and so on. However, if we seek knowledge and understanding then our beliefs and responses should reflect real social inequalities rather than being unreflectively egalitarian; and statistically, some social groups are more likely to possess certain features. For example, at present scientists are statistically more likely to be male (in the UK in 2014, only 13% of people working in the sciences were female, according to the Women In Science and Engineering campaign, WISE). Therefore I am more likely to have accurate beliefs if I adopt certain stereotypes; for example, if I assume a random scientist is more likely to be male than female. This has ethical implications if we want to encourage women to enter science. But as philosophers seeking knowledge, is this just the price that needs to be paid for accurate thinking? Puddifoot argues that the best choice from a knowledge perspective is also the best from an ethical perspective, by demonstrating the epistemic innocence of inaccurate thinking in some cases – for example, in thinking that a scientist is as likely to be a woman as a man.
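The statistical point can be put as a line of arithmetic. Using only the 13% figure quoted above, and assuming nothing else is known about the person, a quick sketch in Python:

# Base-rate sketch using the WISE figure cited above: 13% of UK scientists were female in 2014.
p_female_given_scientist = 0.13

# Stereotype-guided guess: always guess 'male' for a randomly selected scientist.
accuracy_stereotype = 1 - p_female_given_scientist   # correct 87% of the time

# Egalitarian guess: treat either answer as equally likely.
accuracy_egalitarian = 0.5                           # correct 50% of the time

print(accuracy_stereotype, accuracy_egalitarian)     # 0.87 0.5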

As mentioned, the implicit bias associated with the stereotype that women are not scientists and scientists are not women sometimes brings the epistemic benefit of yielding true assumptions concerning random individuals. If you’re trying to work out which person in a room consisting mostly of scientists is an administrator, you will sometimes be more likely to identify them if you focus on the females. But there are several epistemic costs of implicit bias. The first concerns distortion of memory. Research shows that if a person is aware of the social characteristics of an individual and those characteristics fit with a stereotype, then the information remembered about that individual increases but is also biased towards the stereotype. Imagine a candidate for a top neuroscience job who had a career break for a year, but who has also completed a research visit at a prestigious university. If the person reading their CV [resume] knows that this applicant is female, they are more likely to take note of her career break because it fits with a stereotype of women being less vocationally committed; the research visit receives no such added memorability. The accurate belief that women are less likely to be scientists has the epistemic cost of distorting thinking about individuals to fit wider stereotypes of women. Moreover, this distortion of memory is not outweighed by the increase in remembered information: it would be better, and fairer, to remember less information, but for that information to be unbiased against the candidate.

A second epistemic cost of stereotyping is misinterpretation of ambiguous evidence. When some characteristics of an individual are known and have a stereotype attached, implicit bias can lead to misinterpretation of the evidence, even if the stereotype to some extent reflects social realities. So for example, if a female scientist makes some errors in an important presentation, this evidence is ambiguous: it is consistent with a lack of knowledge, but also with a lack of confidence in public speaking. Yet someone with the (accurate) belief that most scientists are male may also carry the implicit bias that scientific expertise is to be associated with men and so (perhaps inaccurately) interpret the errors as the result of a lack of knowledge. Since the majority of scientists are men, the stereotypical belief does reflect social reality to an extent, but has greater epistemic costs in that this behavioural evidence is misinterpreted.

Two further, related, epistemic costs of stereotyping are failure to notice differences between individuals and failure to notice similarities between members of different groups. When a stereotype is being employed, the people stereotyped are seen as group members, and minority groups are seen as less diverse and more likely to share characteristics than a majority group (see the paper by Bartsch & Judd in European Journal of Social Psychology 23, 1993). So female scientists will be seen as more homogeneous than their male majority counterparts. This is costly in terms of knowledge because it causes details about individuals, which could affect important judgements, to go unnoticed. Additionally, similarities between groups are less likely to be noticed. For example, when a scientist is a woman, any sign of lack of commitment is spotted, but similar signs may be overlooked if displayed by her male colleagues. Once again this is an epistemically costly omission of factual information.

The fourth epistemic cost identified by Puddifoot is failure to truth-track in explanations of behaviour. When implicit bias comes into effect, people may use the group membership of an individual as an explanation of their behaviour if it fits the stereotype, neglecting other possible explanations and relevant information. In particular, an act is explained in terms of the nature of the agent if the act fits the stereotype, and in terms of the situation if it does not. Let us go back to the example of the female scientist who makes mistakes in her presentation. The stereotype that scientific expertise is a male trait rather than a female one (whilst reflecting some aspects of social reality) means that her errors are explained in terms of her capabilities, even if other explanations would be equally adequate, if not better. Yet when a male is observed to make the same errors the behaviour is more likely to be explained through the situation – perhaps there was something distracting him – although in fact he may have simply lacked the knowledge. However, the latter explanation does not fit the stereotype of male scientific competence.

The fifth and final epistemic cost of stereotyping is inappropriate associations and cognitive depletion. The epistemic benefit of stereotyping is that assumptions are made that accurately reflect social reality. However, when people stereotype they often make a host of other associations that do not reflect social reality. For example, the stereotype of a scientist as male may be associated with the belief that males are likely to have a higher IQ than females, which is not true. Or the belief that females take on a more nurturing role in the family and society (which reflects what often happens in society) may be conflated with the belief that they are disposed to be more nurturing (which arguably is not accurate). We may also form biases based on superficial features that reflect a stereotype even when the majority of features of that individual do not conform to it. For example, a woman with typically feminine facial and bodily features may trigger the stereotype of the nurturing wife and mother even if she is not particularly nurturing.

Of course, we can work to suppress automatic bias, but this takes effort that can deplete cognitive resources and therefore has epistemic costs itself. Given these costs of even accurate stereotyping, there appears to be an epistemic benefit to having non-stereotypical beliefs that produce egalitarian responses, even if these fail to fully reflect social realities. The strategies for avoiding implicit bias deserve more space than I have here, but broadly speaking, either the relevant social information can be withheld so that it cannot affect thinking, or someone can actively misrepresent social reality; for example, responding as if men and women were equally represented in the sciences. Both methods would lead to an egalitarian response, which is ethically sound, and also avoids the epistemic costs outlined above. By actively cultivating imperfect thinking we can avoid misremembering details, misinterpreting ambiguous evidence, failing to notice relevant similarities or differences, failing to truth-track in terms of explanations of behaviour, and making inaccurate associations. So by shunning even generally correct stereotypes we can actually increase our chance of holding true beliefs.

As in the case of delusions, for thinking that fails to accurately reflect social realities to be epistemically innocent, there must also be no alternative conferring the same epistemic benefits without the costs. Research shows, however, that if people are sensitive to social differences and so have beliefs that accurately represent reality, they are highly likely to engage in automatic stereotyping, with the costs I’ve highlighted. It appears we cannot have our epistemic cake and eat it.

Puddifoot argues that holding beliefs that fail to accurately represent social reality can be the lesser of two evils here. Stereotypes that reflect at least some aspect of social reality sometimes lead us to make an accurate assessment, for example that a randomly selected scientist will probably be male. However, by avoiding such stereotypes we will also avoid their pitfalls.

Summary

These are just two examples where imperfect or inaccurate beliefs have redeeming features. In the first example (delusions), inaccurately interpreting information to fit pre-existing negative schemata reduces anxiety and psychological discomfort and contributes to a coherent sense of self. In the second (stereotypes) there is sometimes more to be gained epistemically from holding beliefs that do not accurately represent social reality. And so, in the absence of equally effective alternatives at no epistemic cost, both cases are said to be epistemically innocent.

The research at Project PERFECT has important philosophical implications. In a discipline where truth is revered above all else, this research forces us to reassess how we understand the relative value of truth and falsity. But there are also implications for how we understand mental health, forcing us to reassess the culturally-constructed boundaries between normal and abnormal, or healthy and unhealthy, thinking.

© Sally Latham 2018

Sally Latham is a Philosophy lecturer at Birmingham Metropolitan College.

• For more on Project PERFECT, please visit http://projectperfect.eu.
