Thought, Consciousness, Brains and Machines
Adrian Brockless on the proper way to use the words ‘thought’ and ‘consciousness’.
The concept of thought is one with which people make mischief. Is thought to do with what it is like to have experiences? How can we know for sure that others think as we do? In what ways are thought and consciousness related? Can an unconscious entity think? Perhaps thought emerges from ever-increasing degrees of computational sophistication? Yet there are machines capable of far more complex computational manoeuvres than any human being, to which we do not accord thought.
A Solid Grounding in Thought
Both neuroscience and artificial intelligence require a clear understanding of the concepts of thought and consciousness. Without this understanding, one cannot be clear about what their investigations reveal. But can philosophy really inform the kinds of scientific investigations now taking place in AI and neuroscience? Many believe not. Nevertheless, our understanding of concepts and how they operate is a philosophical matter, not a scientific one. Scientific discoveries can make a concept redundant, but they cannot falsify it, that is, show that the concept itself doesn’t make sense. To take a well-worn example, the concept of ‘phlogiston’ was once associated with the process of burning. Phlogiston was believed to be a substance in any object that could burn. The existence of phlogiston in flammable objects has long since been disproved, and we now have much clearer ideas about how combustion takes place. But properly speaking, the concept itself is neither true nor false, merely redundant – unless one is writing a treatise on the history of science (or a philosophical article about concepts).
Here’s another example of a conceptual question that is philosophical as opposed to scientific. I’ve adapted this from one supplied by Ludwig Wittgenstein (1889-1951) in his Blue and Brown Books. Imagine that the electron microscope has just been invented. You decide to train it on part of the table in front of you – an object you’ve always considered to be solid. To your great surprise you discover that the table is made up of atoms with gaps in-between. The dense nuclei of the atoms make up only a tiny proportion of the table’s volume and are separated by relatively vast spaces that are essentially empty. Have you disproved the concept of solidity? Or have you merely discovered something more about what it means for something to be solid?
How you answer this question will, of course, determine what (if anything) you count as a solid object; but there is nothing in science that can help you say what a correct concept of ‘solidity’ should be.
So after the discovery that solid objects comprise atoms which are actually vibrating, with gaps in-between, what should we make of the concept of solidity? Unlike the concept ‘phlogiston’, the concept of solidity has not been made redundant. But has it been proved true? No more so than the concept of phlogiston has been proved false.
If one argues that the discovery that objects are made of atoms with gaps in-between means that the concept of solidity has been proved false, the scientific explanation becomes confused, in the sense that the phenomenon being explained – the solidity of everyday objects – would not be solidity at all! This might be acceptable if solidity were something about which we merely speculated, but that is not so. We put our mugs and pens down on solid surfaces, and we contrast solid objects with stuff such as water and air, which are not solid. The concept of solidity has uncontroversial everyday uses. So, far from proving the concept of solidity to be false or redundant, the scientific discoveries instead tell us more about the nature of solid objects. To claim that the concept of solidity has been proved false by scientific discoveries is therefore poor philosophy. It is poor philosophy because the purpose of the scientific investigation is to explain how it comes to be that objects are solid, not to determine whether the concept of solidity is itself true, false, or non-existent. (And it cannot be false, because it is a concept we use every day perfectly meaningfully, and indeed could not do without.) It is also poor philosophy because it wrongly suggests that scientific investigations explaining the nature of solidity have implications for our everyday applications of the concept. They don’t. This is why what we count as solidity, and how we do so, is a philosophical question as opposed to a scientific one.
The same holds for the concepts of thought and consciousness. The meanings of these concepts cannot be determined by empirical discoveries (although, of course, they can be extended by them). As I will argue, neither can advances in technology themselves determine whether or not a machine thinks.
Thoughtful Uses of ‘Thought’
We have always used terminology associated with thought metaphorically. It might be said of a book, if it falls repeatedly from a bookshelf, that ‘it has a mind of its own’; or a car might be called ‘grumpy’ if it refuses to start on a cold winter morning. These are unproblematic metaphorical uses of terminology associated with thought. No one is thereby suggesting that books really do have minds or that cars really are grumpy. But in recent years a tendency has arisen in the scientific and philosophical communities to apply such metaphors literally in some areas, perhaps forgetting that they are metaphors. Some claim that it is literally one’s brain that thinks, infers, hypothesises and so on, as opposed to saying ‘I think’, ‘I infer’, ‘I hypothesise’, etc, as a result of the brain’s activities. Is this merely a question of semantics? I think not, for the following reasons.
We know what ‘to think’ and ‘to infer’ mean through our uses of them in everyday contexts. That’s how we master using these terms, and how they’re taught to us. The same is true in relation to the concept of solidity. But to say ‘my brain thinks’ or ‘my brain infers’ does not mean anything in our everyday discourse. I do not ordinarily say, for example, that ‘my brain is thinking’ when considering a complex mathematical problem.
This is important because our normal language use provides the criteria for legitimate ascriptions of what does and doesn’t count as thought. These allow for the development of investigations aimed at telling us more about thought and the brain. If when devising our research we violate the grammar of such concepts – if we use the terms ‘think’ or ‘consciousness’ without our use being anchored in the ways in which these words are normally used – then whatever we discover will be compromised by our confused uses of those terms. We might for instance then claim that something thinks when, in fact, it does not – as when computer scientists claim that machines think and neuroscientists that brains think. This is similar to the claim that there are no solid objects because it has been discovered that solid objects are comprised of moving atoms and gaps, in the sense that it involves a confusion over the use or application of concepts.
I am certainly not saying that brains have nothing to do with thought. Obviously, without brains there could be no thought, no sight, no hearing, and no consciousness. Nonetheless – to update another example from Wittgenstein – imagine if there were just live brains in vats of nutrients, with no human bodies and no human behaviour, but that these brains displayed normal human neurological activity. What would give us so much as the idea that they were thinking? The neurological activity itself? Of course not! What allows us to talk of thinking in relation to neurological activity is originally the behaviour of human beings. Such behaviour is related to the rules which govern our correct uses of the word ‘thinking’. We believe someone is thinking because their behaviour shows us that they are. Of course, the concepts of ‘brain’, ‘thought’ and ‘neurological activity’ are very much bound up together, in that brains and neurological activity are required for thought, and we associate certain kinds of thought with certain neurological patterns. But neurological activity is not identical with thought. So what, exactly, do we mean when we say that something thinks?
Recent developments in AI have led many to believe that machines can think. After all, computers can carry out countless tasks far faster than their human creators: they can solve phenomenally complex mathematics problems in a few seconds; they have beaten the world’s best human chess masters; robots can play violins, and so on. Indeed, in Flesh and Machines: How Robots Will Change Us (2003), Rodney Brooks, Professor Emeritus of Robotics at MIT, argues that his students respond to robots as they do to their human peers (Brooks fails to notice that turning off a robot when leaving the lab is not reacting to it as one would to a human being). But does this mean that such robots and computers are becoming conscious? And is it possible for thought to take place without there being consciousness?
I will return to these questions presently. For the time being, I want to explore the relationship between what machines can do and the possibility of thought.
The Rules of Thought
There have been calculating machines of one form or another for thousands of years. The abacus, for example, was invented over 2,500 years ago. But can one legitimately say that an abacus is a basic form of computer? Many would say that an abacus is not a computer, not least because it is neither electronic nor automated. However, both the abacus and electronic computers operate through rules that relate to particular kinds of results: an abacus was designed to facilitate certain forms of calculations on the basis of particular types of operations; and computers function through the operation of algorithms. But do (any) computers follow rules, or merely act in accordance with rules? I think this Wittgensteinian distinction is important when considering the issue of machine consciousness.
Computers and robots act in the ways they do because of how they have been designed and made. Whether or not their tasks are performed correctly is down to their internal mechanisms. In other words, if all the connections in the computer’s hardware have been set up correctly, its program contains no mistakes, and the inputs are correct, then the output will also be correct. Outputs are causally determined by the inputs and the processes designed to produce them: they are causally inevitable. This is also, in essence, the currently popular ‘functionalist’ model of the human mind. But is it an accurate description of the nature of thought? Again, it is not uncommon to hear computer scientists claim that it is. Let’s consider this for a moment.
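To fix the idea, here is a minimal sketch in Python (my own toy illustration; the function names are hypothetical, and nothing here is drawn from the functionalist literature) of a machine whose outputs are fixed entirely by its inputs and its built-in process:

```python
# A toy 'machine' whose output is causally determined by input and mechanism.
# Purely illustrative; the names are hypothetical.

def machine(a: int, b: int) -> int:
    """With sound 'wiring' and correct inputs, the output follows inevitably."""
    return a + b

def faulty_machine(a: int, b: int) -> int:
    """The same mechanism with a 'loose connection': the causal chain still
    runs, but its outputs no longer accord with the rule it was built to serve."""
    return a + b + 1

print(machine(2, 3))         # 5
print(faulty_machine(2, 3))  # 6
```

In both cases the mechanism simply runs; nothing in the causal story itself makes the second output ‘wrong’. That verdict, as we shall see, comes from elsewhere.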
In Volume 3 of his phenomenally acute commentary Wittgenstein: Meaning and Mind (1993), Peter Hacker points out that one could create a computer by building a very complex miniature railway set. Points, storage depots, and different kinds of carriages and trucks would all act in ways determined by the tracks, and be used at different times in different combinations depending on the tasks involved. (The tracks themselves are the rules in accordance with which the computer must act.) When this model train-set computer is in operation, would one say that it’s thinking? Obviously not. Indeed, as a boy, I built a fairly complex model train set, and I can confirm that it was not thinking! Hacker goes on to point out that today’s computers are, in essence, very fast and more complex versions of this idea. Does speed make a difference in terms of whether computers think? If so, at what speed might we say that thought emerges? Remember, it is the same tasks that are being performed, just more quickly. What about ever-increasing degrees of complexity? One could build a massively complex but slow train set; would that think?
But there is another, more crucial pair of distinctions relating to rule-following: between causal and logical determination, and between acting in accordance with a rule and following a rule.
As I’ve already mentioned, causal determination is where the outputs of a machine are causally determined by its inputs and processes. Computers have been designed to act in accordance with rules. If something goes wrong with the internal mechanism, then the outputs will no longer be in accordance with the rules it has been designed to follow.
However, what makes the outputs correct or incorrect is not the causal inevitability of the process. Rather, it is logical determination, the practice of following a rule, that establishes what counts as a correct or incorrect output from machines designed to act in accordance with rules. Put another way: the determination of the correctness of any kind of computation cannot be causal; it must be logical. There is nothing either logical or illogical about causal determination, or about acting in accordance with rules. By contrast, logical determination is the process of following a rule. So the fact that a machine is causally determined to replicate or emulate human behaviour does not mean that it thinks.
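The distinction can be pictured in a toy example (again my own sketch, with hypothetical names): the causal process yields whatever it yields, while the verdict ‘correct’ or ‘incorrect’ comes only from comparing the output with the rule, something we do and the mechanism does not.

```python
# Sketch: causal process versus logical standard of correctness.
# 'rule_of_addition' stands in for our shared arithmetical practice.

def causal_process(a: int, b: int) -> int:
    # Whatever the mechanism happens to produce; here, a deliberate quirk.
    return a + b + (1 if (a, b) == (2, 2) else 0)

def rule_of_addition(a: int, b: int) -> int:
    # The standard used in judging the output; applied by us, not by the machine.
    return a + b

for a, b in [(1, 2), (2, 2)]:
    out = causal_process(a, b)
    verdict = "correct" if out == rule_of_addition(a, b) else "incorrect"
    print(f"{a} + {b} -> {out} ({verdict})")
```

Nothing in the first function makes its outputs right or wrong; the verdicts appear only when the outputs are measured against the rule.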
Now according to Wittgenstein in Philosophical Investigations (1953), the criterion for saying that one is following a rule correctly, or not, is whether one’s behaviour fits what the rule specifies. The correct following of rules therefore obviously requires certain forms of behaviour from us. But here the rules being followed do not cause our behaviour in the way that they cause a computer’s behaviour; rather, our behaviour expresses an understanding of what we are doing.
Demonstrating Nothing Like Thought
So what does thinking amount to?
As Hacker points out, if a being can think it must also make sense to say of it that it can judge, reflect, be open-minded, dogmatic, impetuous, thoughtless, careless, sceptical, cynical, optimistic, unsure, and so on. All of these and many more traits only make sense within the context of human forms of life. In other words, the concept of thought only makes sense within a weave of life of which all these other attributes are a part. Not only that, but consciousness is presupposed in relation to something that is understood as reflective, contemplative, rash and so on. A mechanical process with a causally determined output neither establishes the rules for such forms of life nor is internally related to them.
But surely one might still argue that algorithms of sufficient complexity can give rise to a machine that thinks? Surely, given sufficient computational power, a machine can be created that exhibits all the attributes I’ve just listed, and more?
Firstly, let’s suppose we agree that only conscious beings can think (whilst I am saying that consciousness is necessary for thought, I am not suggesting that thought is necessary for consciousness). If complexity gives rise to thought, this implies that consciousness somehow magically emerges out of complexity in the brain. However, think about this in relation to the model railway computer again, and you’ll see that it is not just an issue of complexity. What’s more, machines such as laptops already exist that can do some complex tasks far more quickly and efficiently than human beings, but which we are not even remotely inclined to class as conscious. By contrast, we think of our pet dogs and cats as conscious; but these animals are far less complex, in terms of what they are able to do, than many machines that already exist and to which we do not accord consciousness. It seems then that correctly applying the concept of consciousness to something does not depend on the computational complexity of that thing. And if we cannot derive consciousness from complexity, then neither can we derive thought from complexity.
Secondly, a complex computer is still merely the product of the behaviour of its creator, and so it acts in accordance with rules as opposed to following them. Just as aeroplanes and steam engines are the products of their creators, designed to make travel easier and more efficient, so a computer is created with particular goals in mind – for example, to do calculations that would take human beings far longer to complete. Computers and robots are, in this respect, merely products and extensions of human behaviour. Accordingly, computers only act in accordance with rules, as opposed to following rules that are genuinely their own.
But what about computers that develop their own algorithms? Are these computers not thoughtful? Are they not conscious?
Computers that create their own algorithms will have thereby created a set of rules, perhaps ones not previously thought of by human beings. However, these rules will have been causally determined, as opposed to logically so. More to the point, we can only understand the rules that the computer has developed by reference to our existing practices. Put another way: if the computer’s algorithms came up with supposed ‘rules’ which resulted in output (that is, behaviour) that bore no resemblance to human forms of life, then they would not be rules, in the sense that they would not meet the criteria required for us to understand them as demonstrating rule-following.
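As a toy illustration (my own sketch; no actual machine-learning system is being described), consider a program that searches for its own ‘rule’ fitting some worked examples. The search is wholly causal, and the coefficients it settles on are recognisable as the rule of addition only because we already possess that practice:

```python
# A toy 'rule-learner': brute-force search for a linear hypothesis
# w1*a + w2*b + c that fits the examples. The search is entirely causal;
# its result counts as 'addition' only against our existing practice.

examples = [((1, 2), 3), ((2, 5), 7), ((4, 4), 8)]

found = None
for w1 in range(-2, 3):
    for w2 in range(-2, 3):
        for c in range(-2, 3):
            if all(w1 * a + w2 * b + c == out for (a, b), out in examples):
                found = (w1, w2, c)

print(found)  # (1, 1, 0): a mapping we read as 'a + b'
```

The program has produced a mapping; it is we who read that mapping as a rule at all.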
Thirdly, the meaning and significance that the behaviour of others has for us – which may be grounded in our primitive reactions, but which also provides the criteria for saying whether or not a rule has been applied correctly – is what produces our conceptual landscape.
As demonstrated by the ‘brains in vats’ example, neurological activity in the brain is not in itself meaningful to us. Rather, we have discovered that such activity is correlated with our behaviour, which we do find meaningful; and we understand thought in terms of this meaningful behaviour. So one cannot define thought as neurological activity in the brain any more than one can define it as complexity in a computer. My brain itself cannot think. Rather, thought is known through what Wittgenstein referred to as ‘an attitude towards a soul’. He was not speaking of attitude in the way one might say that a gangsta rapper has attitude. ‘Attitude’ as Wittgenstein construes it is exemplified in the ways in which we are naturally disposed to react towards our fellow human beings and other animals – what we might also think of as our primitive reactions. Attitude also links the concept of thought to that of consciousness, insofar as it is our dispositions that determine the nature of our responses to people, animals, and machines. Our dispositions condition our conceptual landscape and so set the rule-governed criteria for the applications of our concepts. In other words, the different ways in which we are naturally disposed to respond to the world are both constitutive and expressive of how we conceive of the differences between people, animals, and machines. The correct use of the terms ‘thought’ and ‘consciousness’, therefore, is not a question of technological or neurological complexity, nor of the tasks that machines are able to perform, but rather an aspect of the way in which we are disposed to respond to human beings as opposed to machines. Bluntly, we say that people think because we naturally respond to them as thinking beings. And at present we are unable to respond to machines as conscious beings.
Conclusion
Computers will become ever more complex, but nothing in their complexity guarantees that they think or are conscious. Only in the event that we create a machine that invites us to respond to it naturally, as we do to our fellow human beings, will we have created a machine that we can say thinks. But then we will have created a person – except a person made out of non-organic parts (paradoxically, that fact alone may be enough to prevent us from responding to it in ways that allow us to correctly attribute thought to it). Of course, such a machine would have to perform tasks only at the speeds human beings can: were it to have much greater computational power or response speed, we would not respond to it as we do to our fellow creatures.
© Adrian Brockless 2019
Adrian Brockless has taught at Heythrop College, London, and at the University of Hertfordshire. He was Head of Philosophy at Sutton Grammar School from 2012 to 2015, and currently runs his own series of adult education classes in philosophy. Email: a.brockless@gmail.com.