Good News from Neurology – But Don’t Get The Wrong Idea
Francis Fallon thinks about the difficulty of deciphering thought in the brain.
In November, news broke concerning an exciting development in neurology. Via fMRI (a brain scanning technology), a patient in a vegetative state, who had suffered serious brain damage in a car collision years earlier, communicated to doctors that he is not in pain. Scott Routley’s vegetative state meant that he had emerged from a coma and appeared to be awake, but he showed no signs of awareness. However, a new technique pioneered by Prof. Adrian Owen and others, at the MRC Cognition and Brain Sciences Unit in Cambridge and the Brain and Mind Institute at the University of Western Ontario, has allowed Routley to convey significant information.
Before analysing these findings, let’s not forget what’s most important here: Scott Routley is not in pain, and Prof. Owen’s technique will allow more vegetative patients to help doctors care for them. I won’t call any of this into question. However, the reporting of Routley’s communication has, perhaps inevitably, taken a misleading form, and stands in need of a bit of philosophical clarification to dispel the seductive notion that neurologists can discern specific thoughts by examining brain states.
Casual followers of Routley’s story can be forgiven for a number of misapprehensions. Newspapers have run headlines announcing that Routley has said “I’m not in pain.” Only in the loosest sense is this true. Of course, Routley does not have the ability to vocalise his thoughts, but this is not the point. Rather, Routley’s communication involved no vocabulary or syntax at all. Instead, Routley was instructed to think about playing tennis when he wanted to convey ‘no’, and to think about walking around his house when he wanted to convey ‘yes’. This distinction is relevant for understanding the nature of the achievement. Patients such as Routley can only answer questions with a very limited number of responses. Happily, when Prof. Owen asked Routley if he was in pain, the fMRI scan matched earlier instances of Routley thinking about playing tennis: the part of the brain typically involved in such thought, the supplementary motor area, was shown by the scan as being active. This result corresponded to a ‘no’ response.
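The logical core of this protocol is small enough to sketch in code. The sketch below is purely illustrative, not Owen’s actual analysis: the region names, activity values, and the simple comparison rule are assumptions standing in for a careful statistical test. What it makes vivid is that the decoder outputs a single bit, never a sentence.

```python
# Illustrative toy only: the yes/no protocol reduced to its logic.
# Region names and the threshold rule are assumptions, not Owen's method.

def decode_answer(sma_activity: float, navigation_activity: float) -> str:
    """Map imagined-task fMRI activity to a yes/no answer.

    Following the convention described above: tennis imagery drives the
    supplementary motor area (SMA), conveying 'no'; imagining walking
    around the house drives spatial-navigation areas, conveying 'yes'.
    """
    if sma_activity > navigation_activity:
        return "no"   # patient imagined playing tennis
    return "yes"      # patient imagined walking around the house

# One question, one bit of information out -- no vocabulary, no syntax.
answer = decode_answer(sma_activity=0.82, navigation_activity=0.31)
```

Note that nothing in this procedure reads a thought: it only classifies which of two pre-agreed imagery tasks the patient performed.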
The distinction between saying and indicating is relevant for understanding how brain activity relates to thought and language. Some philosophers and cognitive scientists believe that all human thought comes in the form of mentalese, an internal mental language consisting of explicit structures that bear linguistic-type meaning. (Jerry Fodor, for example, has pioneered this position.) Others question whether representational systems of thought need have linguistic properties at all. Languages themselves rely on beliefs that do not have any explicit structure: brains do not code most trivial beliefs, such as ‘There are more than four hundred people in the world’, yet still we speak meaningfully. So perhaps meaningful thought takes place without explicit coding of everything that makes the belief meaningful. If one were under the misapprehension that doctors had read a complete propositional thought – the sentence ‘I am not in pain’ – in Routley’s neural activity, this would favour the presupposition that thoughts take place in explicit mentalese. The connection between the brain and thought is not so straightforward.
Not Reading Your Mind
A thought experiment used by neuroscientists and philosophers illustrates how even the most comprehensive knowledge of a brain would not translate into an understanding of that brain’s thought. Imagine a ‘cerebroscope’, a device capable of reading all neural activity, both at the level of the neuron and at the level of systematic groupings of neuronal activity. Unfortunately, if we try to imagine a device that could also then translate the cerebroscope’s data back into what is being experienced, faithfully reporting the experience of, for example, an oncoming red bus, our fantasy runs into problems. For a start, contingent factors influence the associations of neurons, so that one person’s coding for the image of a bus will not be another person’s. It is true that different brain regions specialise in different things; it is for this reason that doctors have been able to treat Routley’s communication as genuine. Even this regionalisation, however, only holds contingently. Damaged brains can rewire themselves dramatically, resulting in an organisation radically different from that of normal brains. Function is not tied to a particular brain structure. It follows that any given thought has multiple possible structural realisations. For example, the thought ‘I am reading this article’ will have one physical instantiation in your brain, and another, perhaps quite different, in someone else’s. So how can we translate from data to experience?
What about a device that could read our neural activity from birth? It might seem that this would suffice for translating its data into thought, but this does not follow. A cerebroscope that read all neural activity from birth onwards could report the activation of the structural systems that, for example, enable thought about an oncoming red bus. It could not, however, convey the content of that thought, which depends on connections and associations based on an inherently idiosyncratic encounter with the world.
Sticking with the example of the oncoming red bus, imagine that the cerebroscope has been implanted in the brain of a busy London pedestrian. She regularly experiences looming buses, and she consistently reacts with a reflex to navigate to safety without inconveniencing others. Let us say for the sake of argument that her experience of the bus always correlates with Brain State X, and her response always correlates with Brain State Y. Even granting this oversimplification, the cerebroscope only measures brain states, and therefore never infers beyond them to their content. So in our idealised example, the best the cerebroscope could do would be to give reports of the form ‘Brain State X tends to be followed by Brain State Y’. But the nature of the experience of looming buses associated with Brain State X is a matter of contingency, an historical fact totally unavailable to the cerebroscope. For example, if the pedestrian always had a particular fear response before moving to safety (which includes a Brain State F occurring at the same time as X), the cerebroscope could not distinguish the two types of experience according to content. Even given the most futuristic science, this kind of ‘mind reading’ remains impossible.
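The limitation can be made concrete with a toy model of the thought experiment. In the hypothetical sketch below, the cerebroscope records only state labels and summarises which state tends to follow which; the labels and logs are invented for illustration. Two stretches of experience with quite different contents that happen to pass through the same states yield indistinguishable reports, which is exactly the point.

```python
from collections import Counter

# Toy cerebroscope: it sees only state labels, never the experiences
# those states happen to encode. All labels here are invented.

def cerebroscope_report(state_log):
    """Summarise which brain state tends to be followed by which."""
    transitions = Counter(zip(state_log, state_log[1:]))
    return dict(transitions)

# Two days with different contents of experience (a looming bus on one,
# something else entirely on the other) that pass through the same states:
monday  = ["X", "Y", "X", "Y"]   # bus -> evasive move, twice
tuesday = ["X", "Y", "X", "Y"]   # different experiences, same states

# Identical state data, identical report -- content never enters into it.
same_report = cerebroscope_report(monday) == cerebroscope_report(tuesday)
```

The report format (‘X tends to be followed by Y’) is the ceiling of what a state-reading device can say; what X was *like* for the pedestrian is the historical fact it cannot reach.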
Movies In The Mind?
Still picture from Fantastic Planet © Argos Films, 1973
Another moment of scientific progress has received misleading press in much the same way as Routley’s communication. In September 2011, Prof. Jack Gallant of UC Berkeley ran a startling and impressive experiment. The steps proceeded as follows: (1) Subjects watched movie trailers while an fMRI recorded their brains’ responses. (2) A computer organised the findings, creating a model for each subject’s brain responses to images. (They vary.) (3) Subjects watched more movie trailers. (4) The computer was given the fMRI results of the latest viewings. (5) From these fMRI results the computer reconstructed the images that subjects had seen using a database of footage drawn from the internet. The images match, if a little crudely.

Headlines announced this as ‘Mind-Reading’ and ‘Looking Inside The Brain’; articles reported it as ‘Recreating Images In The Brain’. Gallant himself discussed ‘internal imagery’ and ‘movies within the mind’ with interviewers.

Once again, though, this language assumes things that philosophers and cognitive scientists question. The notion of mind-reading implicitly relies on mentalese, which we have seen is controversial. Even the commonplace phrase ‘mental imagery’ becomes controversial under scrutiny. Some philosophers and cognitive scientists cite our inability to report details from our mental images as evidence of their metaphorical rather than their literal existence. To treat images as simply inside the brain presumes a viewer inside the brain. The idea of a movie-in-the-mind poses the same problem. Both expressions evoke a ‘Cartesian theatre’ – a place in the brain where images flow past a homunculus (a little man) who watches them. This account is famously problematic, largely because it seems to lead to infinite regress. (If the little man or his equivalent in your brain sees an image, then to account for him seeing that image, an even littler man in his brain would have to see it, and so on.)
Nothing in Gallant’s experiment says anything about any of these debates. Instead, the experiment finds and exploits a statistical regularity in the individual’s primary visual cortex’s responses to types of visual stimuli. Perhaps not the stuff of headlines, but at least this description has accuracy on its side.
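That deflationary description can itself be sketched. The code below is a schematic stand-in, not Gallant’s actual model: it fits a simple linear encoding model from stimulus features to simulated voxel responses (steps 1–2), then, given a new measured response (steps 3–4), reconstructs by picking the database clip whose predicted response best matches (step 5). All data is randomly generated, and the dimensions are arbitrary assumptions.

```python
import numpy as np

# Schematic sketch of the five steps, with random stand-in data.
# This is a statistical regularity being exploited, not a mind being read.

rng = np.random.default_rng(0)
n_voxels, n_features, n_clips = 50, 10, 100

# Steps 1-2: learn a subject-specific encoding model mapping image
# features to voxel responses (here, the 'true' map plus a little noise).
W_true = rng.normal(size=(n_voxels, n_features))
train_feats = rng.normal(size=(200, n_features))
train_resp = train_feats @ W_true.T + 0.01 * rng.normal(size=(200, n_voxels))
W_hat, *_ = np.linalg.lstsq(train_feats, train_resp, rcond=None)

# Step 5's database: candidate clips with known feature vectors.
database = rng.normal(size=(n_clips, n_features))

def reconstruct(measured_response):
    """Return the index of the clip whose predicted response best matches."""
    predicted = database @ W_hat                       # (n_clips, n_voxels)
    dists = np.linalg.norm(predicted - measured_response, axis=1)
    return int(np.argmin(dists))

# Steps 3-4: the subject 'watches' clip 42; we measure and decode.
measured = database[42] @ W_true.T
best_match = reconstruct(measured)
```

Everything here happens at the level of correlations between stimuli and responses; no internal image is ever inspected, which is why the headline language overreaches.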
Lessons For Thinkers
These distinctions may strike some as needlessly theoretical, but no such criticism can attach to the evaluation of Routley’s mind. Owen and his colleagues have taken care to include controls based on previous work. Only patients who respond to the instruction ‘think of tennis’ differently from the way they respond to the neutral ‘she played tennis’ are taken to show genuine responsiveness. Routley’s communication is most likely not the product of chance, but this does not necessarily mean that he is conscious in the normal sense of the word (whatever that may be). Owen claims that Routley chooses to answer the questions, for example, and that he knows who and where he is, but the evidence for this is indirect. Even if we grant this, it does not tell us about the richness of Routley’s experience. If consciousness were an all-or-nothing affair, then to interpret the fMRIs of patients like Routley as evidence of consciousness would indeed be simplest and best, as Owen claims. But consciousness may not be an all-or-nothing affair. People in trances, sleepwalkers, and the heavily medicated can respond to linguistic prompts without enjoying full consciousness. The fact of Routley’s communication does not support the assumption of his conscious awareness in the normal sense.
The positive lessons to take from exciting advances in neuroscience are often clear enough. Scott Routley has communicated to doctors via a new fMRI technique that he is not in pain, and he may be able to communicate still more information. The negative lessons are more difficult. Routley’s communication did not rely on his use of language, nor on reading his neural activity as a form of language. Moreover, the kind of communication involved in Routley’s case does not necessarily even indicate full awareness. Consciousness does not work so simply. The lesson here is that we should not let learning about developments in neurology stand in the way of our understanding of the relationship between thought and the brain.
© Francis Fallon 2013
Francis Fallon was at Lancaster University at the time of writing.