Interview
Igor Aleksander
Igor Aleksander is a leading researcher on machine consciousness. Julian Moore asked him about brains and language, self-awareness and robot rights.
Igor, you are the Gabor Professor of Neural Systems Engineering at Imperial College here in London, but your work started back in the middle sixties on individual neurons. Did you ever imagine that you’d one day end up discussing machine consciousness as a practical possibility?
No, I definitely didn’t. I was always against the idea of taking something like consciousness, which seemed to do with humans and biology, and talking about it technologically. But on one occasion, and in fact when I was being interviewed, someone said “This system that you have, which creates internal representations of an outside world, you could say it’s conscious.” So I said, “No, that’s absolute rubbish.” But it got me thinking about it, and indeed from then on, which must have been about 1990, I started thinking about whether by simulating on a computer what people think are the mechanisms of consciousness, one could actually arrive at some sort of artificial consciousness. But it came as a surprise.
Was that through your work with Magnus?
Yes, definitely.
Can you tell us a little about what Magnus is and how it works, in particular this concept of yours of ‘iconic learning’, which seems to be the underlying principle and which supports many of your philosophical arguments?
The work that I’ve been doing over the years has been with neural networks. That means building a technological system which is capable of learning, that uses little cells which have a vague relationship to the neurons we have in our brains. And all the early work that we did, in line with what everybody else was doing, was to create layers of these cells which were capable of learning to recognize images. The system could learn to associate images with labels so that if you showed it a picture of an elephant, the label ‘elephant’ would come out. Now, what bothered me was that you could never go back to this system and ask it “What does an elephant look like?”
It was a one-way process, a translation from a visual representation to some form of label.
Exactly. Whereas what we do in our heads is always a two-way process. We’re constantly aware of the words and phrases that go with things, of the way things look, and what they feel like out there in the world. So Magnus was the step that we took in going from these layered labelling systems to systems that had an inner visual representation of what it is that we’re looking at, as well as what it is that thing is called. That meant using neural networks with feedback, so that these inner representations, these ‘pictures in its head’, if you like, could be sustained and triggered by incoming words.
That was Magnus. Basically it was a very flexible system that allowed us to build these systems with feedback, which we call dynamic neural systems, and also to build many such modules and stick them together. That then enabled us to build systems that would name things in the world, that would know where to find things in the world, and so on.
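[To make the two-way association concrete, here is a minimal Python sketch, not Magnus itself, of a feedback memory that can go from a ‘word’ pattern to an ‘image’ pattern and back again. The class, function and pattern names are purely illustrative.]

```python
# A minimal sketch (not Magnus's actual architecture) of two-way association
# between a 'label' pattern and an 'image' pattern, using a simple
# bidirectional associative memory with feedback until the recall settles.
import numpy as np

def sign(x):
    # Bipolar threshold; ties resolve to +1 for simplicity.
    return np.where(x >= 0, 1, -1)

class BidirectionalMemory:
    def __init__(self, label_size, image_size):
        self.W = np.zeros((label_size, image_size))

    def store_pair(self, label, image):
        # Hebbian-style outer product stores the association both ways.
        self.W += np.outer(label, image)

    def image_from_label(self, label, steps=5):
        image = sign(label @ self.W)
        for _ in range(steps):               # feedback loop until stable
            label = sign(self.W @ image)
            image = sign(label @ self.W)
        return image

    def label_from_image(self, image, steps=5):
        label = sign(self.W @ image)
        for _ in range(steps):
            image = sign(label @ self.W)
            label = sign(self.W @ image)
        return label

# Toy example: a 4-bit 'word' recalls an 8-bit 'picture', and vice versa.
banana_word  = np.array([ 1, -1,  1, -1])
banana_image = np.array([ 1,  1, -1, -1,  1, -1,  1, -1])

memory = BidirectionalMemory(4, 8)
memory.store_pair(banana_word, banana_image)

print(memory.image_from_label(banana_word))   # recalls the stored 'picture'
print(memory.label_from_image(banana_image))  # recalls the stored 'word'
```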
And if presented with the label ‘banana’ would form an image, some sort of representation internally, of a banana?
With the very simple phrases that we could use (just adjectival phrases: noun, adjective, noun, adjective, that sort of thing), it would start building up an imagination of these things if it had seen them.
That’s a very strong word, ‘imagination’; it falls outside the traditional boundaries of science and engineering. Coming from a strongly engineering background you now find yourself immersed in one of the longest-running and perhaps most public philosophical debates, that of consciousness itself. What has it been like entering this arena, and where do you feel the boundaries lie between philosophy, science and engineering?
Yes, I did feel at some point that I was starting to do battle, of sorts, with a different group of people from the engineers. I was trying to persuade people that we were building systems with cognitive properties over and above the kind possessed by systems that artificial intelligence people had built up to that point. But once I started talking about visual awareness and about consciousness, I felt that I was dealing with the biggies. I discovered that nobody actually had any sympathy for anything that I said, mainly because I think there is a feeling abroad that anyone who tries to approach the problem from some sort of principled basis in science, mathematics or engineering is probably on the wrong track.
Did you get the feeling that there are large red signs up saying “No entry – authorised thinkers only”?
Absolutely, and the engineer was right at the end of the queue! The worrying thing is that you can sometimes get really insulting comments on what you are trying to deliver, like “How dare you write a paper which has the word ‘consciousness’ in it when you don’t have the first idea about what the word means!”
Well, I’d like to meet somebody who could actually give a satisfactory definition of the word consciousness.
I think there is a difference between having a definition of the word consciousness and having an idea of what the word means. I’d certainly done a lot of reading by then, on the history of philosophy of consciousness, and philosophy of mind. I do try to find out what it is that people are talking about.
Was it just a problem of terminology separating the two approaches to the same subject, or was it that the philosophical community felt that you were heading down pathways that had already been well explored, maybe already been found to be dead ends?
I didn’t feel that people thought I was raking up things that were already known, but that I was talking about a form of consciousness that had been rejected by the odd individual in the consciousness community. You have as many ‘isms’ and ‘ists’ in this world as you can possibly imagine. Somebody might say “You are outrageously reductionist,” and that would be seen as a total denial of anything that I’d done. I’d reply that in science reductionism isn’t necessarily a bad thing, if what it means is taking something that is pretty complicated and saying something a little simpler about it.
Of course, in philosophy ‘reductionism’ has a slightly more specific meaning than it would have in science in general. It does mean to some that you think that the conscious experience is the same as the underpinning activity of the mechanism that is producing it. Now I would certainly never say that. Obviously the two are not the same, because they are talked about in different ways. But I would argue that one can talk about the two without necessarily believing that they have no contact with one another.
In discussing Magnus’ ability to associate bidirectionally, it appears that Magnus has beliefs, albeit very simple ones. If you say ‘banana’, you can see something forming on the screen, a picture of what’s happening, some sort of image of a banana. In the old sense in which Brentano used the word ‘intentionality’ [= “aboutness”] it would seem that Magnus has this type of intentionality. And if, as he said, intentionality is the mark of the mental, it would follow that Magnus has mental states. Would you go as far as to say that?
I think one has to be very careful when talking about Magnus, for purely technical reasons. Magnus is just a box of software tricks. Magnus has no entity. It’s just a way of doing experiments. But Magnus has the ability to have internal states related to the world out there and this has something to do with intentionality. But the machines that might really tell us something about consciousness are not just a single Magnus module; they are things that begin to look a lot more like a brain. In other words they may have many Magnus modules interacting with one another where several elements of aboutness or intentionality might come up in different modules.
So far we’ve talked about one module with an internal ‘mental’ state. From knowing about these things with internal states that represent the world, which I call iconic automata, one begins to see that where intentionality might be in a technical system is in what engineers and scientists call the state space of this automaton. At any point in time the machine would have one internal state which could be looked at as a possible mental state. Now the ‘aboutness’ of that state has to do with other states in the same system.
The states which are in some sense nearby and readily accessed from the current position?
Yes, from the current position and by the conditions of the input to this system. Classical AI people try to get aboutness into the system by creating a network of labels. So ‘a cat’ might be the label ‘c’. ‘A cat has 4 paws’ is an arrow that points from ‘c’ to another label that says ‘4p’, and the arrow is labelled ‘has’. And from this you can build up everything you know about the cat. This is what John Searle objected to. He said that though you may be building up a database which is a symbolic repository of utterances about cats, the database doesn’t know about cats. But when you start working with Magnus with these rich representations, where it can visualise a cat, visualise the four paws, visualise it in a world and so on, all of this can be done within these dynamic neural networks. So I think that comes close to intentionality.
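[The contrast can be sketched in a few lines of Python. The first structure is the ‘network of labels’ just described; the second is a toy state machine in which the ‘aboutness’ of the current internal state lies in which other states the current input can lead it to. The node, state and stimulus names are illustrative only, not any particular system’s format.]

```python
# A toy 'network of labels': facts stored as labelled arrows between symbols.
facts = {
    ("c", "has"): "4p",          # 'a cat has 4 paws'
    ("c", "is_a"): "animal",     # another fact of the same symbolic kind
}

def lookup(node, relation):
    # The database can return symbols, but it has no inner picture of a cat.
    return facts.get((node, relation))

print(lookup("c", "has"))        # -> '4p'

# By contrast, a toy 'iconic automaton': the machine is always in one internal
# state, and that state's 'aboutness' is carried by the other states reachable
# from it under particular inputs.
transitions = {
    ("seeing_cat", "look_down"):    "seeing_paws",
    ("seeing_cat", "hear_word_cat"): "seeing_cat",
    ("seeing_paws", "look_up"):     "seeing_cat",
}

def step(state, stimulus):
    # Unknown stimuli leave the internal state unchanged.
    return transitions.get((state, stimulus), state)

state = "seeing_cat"
state = step(state, "look_down")
print(state)                     # -> 'seeing_paws'
```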
Iconic representation seems to involve the close association of ideas or representations from different sensory modalities in the same state system. There is that sense of ‘aboutness’ arising from those associations. As you say, that’s what it would be like to know a cat in this sense of a more developed machine. There is a sense of meaning arising there. If we were to ask about meaning per se, in terms of Frege’s work and so forth, what has your work contributed to understanding meaning, of what it means to mean?
Unfortunately, I don’t buy some of these traditional philosophical sequences. The meaning of meaning, second level of consciousness, all that sort of stuff. When you look at things from this Magnus point of view you realise that what is going on with these adaptive iconic systems has something to do with intentionality, something to do with meaning. For me it’s a clarification of those concepts which may not stand too easily with the way these concepts have been clarified in the previous history of philosophy.
If we look ahead to where we’ll be in two hundred years’ time, I believe there is going to be a much simpler and clearer view of concepts such as meaning. It will have to do with internal mental states, the selection of possible decisions, and what to do about those mental states and what to say about them. All of those characteristics have a technical character which we are beginning to see in computer simulations.
Are you saying that meaning, if you think of it in terms of the system, is less of a fixed point and more of a dynamic idea? I’m thinking largely of Quine’s ideas on the indeterminacy of radical translation and of Nagel’s bat. We can’t translate accurately because individual words in another language have particular associations. We can’t know what it’s like to be a bat because we don’t have a high-pitched voice or really big ears.
What I’ve said actually applies to all of those things. The very simple view that I have is that it’s obvious that I can never know what it’s like to be a bat because I’m not a bat, though I do know what it’s like to be Igor. What is important is to know that the bat has a certain set of mechanisms that enable it to know what it’s like to be a bat.
If I create on a computer a virtual creature that looks like a bat and I know all the rules that go into this bat flapping around the screen, and you say “I wonder what it’s like to be that bat,” I’ll say “That’s a stupid question because that bat doesn’t have the wherewithal for itself to know what it’s like to be a bat. It’s just a program that runs on the screen.”
I sometimes have to defend very strongly some of the systems that I develop, that they’re not just programs that flap on the screen, that they have the mechanistic wherewithal to know what it’s like to be whatever it is that they are.
But how do you answer Searle’s criticism, that a simulation of a thing is not the thing itself? When you are simulating the operation of a collection of the modules, you may have a simulation of intentionality or possibly some machine consciousness, but it’s never going to be consciousness as such.
There are a lot of difficult issues here. If I have a proper simulation which has the mechanisms that are involved in generating consciousness, then that simulation only has an existence inside the computer. The only way it can be conscious of anything is of something else that is inside a computer. In other words, the whole thing is virtual. It’s quite proper and right to say that it’s not conscious like a human being, because a human being is conscious of real palpable things.
But I can take – this is what we are doing now – that virtual system which is only conscious in some artificial, virtual way, which can only deal with things that are in a computer, and put it in a physical robot. That robot starts having an interaction with the real world. The inner mental states, the depictions, the meanings of the same program I was using in a virtual sense now start having meaning and representing the real world. I would say that it’s still the same system, still the same software, but it actually represents the working of a brain-like thing.
Because it has a direct connection with the real world.
And it acts in the real world. And when you watch it, the first thing you do is say “This thing seems to know what it wants and where it’s going. It seems to be visually conscious of the world.”
But how much of that is our natural tendency to anthropomorphise?
I think if I built a system which wasn’t brain-like at all, but was just very cleverly crafted through normal programming, there would be a tendency to still say that it’s conscious of the world. In other words I could create a system that was totally unconscious and stupid but would behave like a conscious object. This is why it’s crucial to know what mechanisms are driving a particular entity in order to define consciousness and decide whether that entity is conscious or not.
Since you have based your work, to a certain extent, upon what we know about brain structure, would you claim for whatever degree of consciousness your system might have that it is the same type as ours?
Of the same type is hard to talk about, but I would certainly say it has principles in common. That’s very important because one shouldn’t lose sight of the difference between some artificially conscious object and a living biological object.
What is the essence of that difference?
Putting the mechanics aside, being biological and evolved, the living object has certain needs that are built into its structure; it has certain predispositions in the way it becomes conscious of the world. Those are all geared towards the survival of this very complex system we call a human brain. It just so happens that we have these very successful and complex creatures that have evolved, that we call human beings, that are distinguished by the fact that they use language, for example. All their emotional structure, their needs, their character, their personalities are dependent on their biological nature and the fact that they have a conscious mechanism. I’m interested in the conscious mechanism. I can find applications for those conscious mechanisms in a different, non-biological domain, but then we have a beast which is very different, one that is not going to come out and start competing with a human being for biological resources.
It might compete or might have views about us for other reasons though.
Well it might, but there are good reasons for it having a technological sort of consciousness rather than a biological one. One is that it makes for a better machine. If they want to put a conscious robot explorer on Mars it will need to be afraid; it will need to be pleased with what it’s doing; it will need to be very conscious of the visual world it’s in and be conscious of itself, of what it can do and what it can’t do. It’s got to be as much like an astronaut as we can get it, but in the end it must not worry about whether you bring it back or not.
You have made a point in the past of trying to reassure people there is no great Frankenstein/Terminator scenario in which these AIs will get so fed up with us that they decide that the best thing to do is eliminate us. If I’ve understood you correctly you’ve said that this is a machine consciousness, that it doesn’t have the same biological needs, that it’s not going to compete with us and so forth. But on the one hand you can say that it’s like us in some respects, you’ve spoken of it being based upon some understanding of the brain and of it “being afraid”, in which case why shouldn’t human behaviours like loathing also arise? On the other hand, if you say it’s completely unlike human consciousness, then what basis do we have for predicting the emotional states it might have at all? Surely the possibility’s out there somewhere?
I think there is a possibility, but it’s a bit like the possibility of a very complex control system that you use to fly aeroplanes suddenly going wrong. There may be some common principles between a living organism and an artificial organism, but if I have understood the principles whereby the artificial organism would have something we could call consciousness in order to drive its actions, we then need to distinguish between those principles and the actions that it actually takes. What I’m saying is that the actions that it takes may be limited and very controllable by its designers.
What is exciting about having an entity like that is that by saying it has some common principles I begin to understand what it is like for a human being to want things, desire things and need things which my robot doesn’t actually do or want. I use it as a vehicle for me to think about the mechanism of how it happens in the human being.
You’ve already mentioned language; what does the work so far have to say about the great debate between the Chomskians and developmental thinkers?
It says that Chomsky is largely wrong, but not entirely, and the developmental people are heading in the right direction but haven’t got the answer.
And the answer is?
I don’t have the answer either. I have a feeling for where the answer might lie. What I plan to do for the next few years is to really concentrate on human-type consciousness and its relationship to language. What seems to be happening in the human brain is that, along with those mental states that give us a knowledge of the world, we have a parallel set of mental states which have to do with language. These are well distinguished in the brain. There’s loads of interconnection between the area that is very linguistic and the area that is very visual. It’s an emergent property that links the two together. By emergent property I mean there is an evolved set of connections between the two that give us the tremendous feeling of always being able to verbalise our thoughts and being able to visualise, or have some sort of sensory representation, of things that we hear in language. That’s a very beautiful mechanism that can be translated over to artificial automata, which still needs to be studied.
But assuming the work advances over a few more years, are we looking forward to a time when one of these machines will actually be able to say “I can see an apple,” and mean it, so that the ‘I’ refers to itself and it has self-awareness as well as simple consciousness?
Self is in there right from the beginning. If you imagine that I’m a robot and I’m sitting here looking around discovering objects that I’m interested in, one of the things that I have to do is to find out if an object out there is occluded. I’ve got to take a step to the left in order to see it better. In very simple terms that’s what the robot can do now. It can say “I’m going to go to the left to see the object properly.” Therefore, the concept of a self is something that comes out of looking around and having a representation, which we call a depiction. The depiction has to do with knowing where things are in the world with respect to yourself, and this gives the representation a built-in sense of self.
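[A minimal sketch, assuming a toy ‘depiction’ in which object positions are stored relative to the robot itself; the names and the step-to-the-left rule are illustrative, not the actual robot software.]

```python
# A toy 'depiction': what the robot sees, stored relative to the robot itself,
# which is what builds a sense of self into the representation.
from dataclasses import dataclass

@dataclass
class SeenObject:
    name: str
    bearing_deg: float   # direction relative to where the robot is facing
    occluded: bool       # hidden behind something else?

depiction = [
    SeenObject("apple", bearing_deg=10.0, occluded=True),
    SeenObject("cup",   bearing_deg=-30.0, occluded=False),
]

def plan_move(depiction):
    # If something of interest is occluded, propose a sidestep to see it better.
    for obj in depiction:
        if obj.occluded:
            return f"step left to see the {obj.name} properly"
    return "stay put"

print(plan_move(depiction))      # -> 'step left to see the apple properly'
```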
It is more than a set of shadows on the wall; it is actually a virtual-reality model of the world inside this mechanism?
Yes, and it’s a very sophisticated model. It’s a mixture of memory and perceptual awareness which is not only something we are very familiar with, but something that actually captures the nature of the ‘self’.
It sounds to me as though with these ingredients – consciousness, self-awareness and, possibly in due course, some language abilities – we would have something that we could have sensible conversations with and interact with to the extent that one would be quite tempted to call this a sentient being. What rights would it have, and if none, why should its consciousness be classified any differently from our own?
Well, because rights have to do with the rights to life. The life of an artificial organism may have characteristics and definitions that are completely different to the life of an evolved biological being, or to the life of a rock, or to the life of a plant.
Its value basis may be very different, but that doesn’t necessarily mean it can be dismissed. If, for whatever reason, it said “Please don’t do that,” why should I treat it differently in that respect to all the other respects in which I’m willing to accept it as being what it is?
Because I can’t make the equation that why it doesn’t want to be switched off is in any way anything to do with why I don’t want to be switched off. That’s the kind of facile link that one normally makes. We say “There’s a conscious thing. It’s going to feel about its life like I do about my life,” but it doesn’t. Let’s take this business of death. Death for a human being is the ultimate let down, the ultimate failure of one’s mission because one’s evolutionary mission is to survive, to procreate and to let the species continue. Now, if I build a robot according to Asimov’s laws that a robot must always be subservient to a human being, I think that robot will have some sense of mission even if it is just an amusing toy whose only purpose is to have an interesting conversation with you. You could ask it what it’s like to be a robot and it will say “It’s nothing like being a human being, because I don’t have to worry about death. I just have to worry about whether I have a mission or not. My mission now is to talk to you so I’m fine, and if you want to switch me off that’s perfectly okay with me. If you never want to switch me on again, fine. My mission will have been accomplished. I’ve done my job. Why should it worry me?”
Do you think we will live long enough to actually see this come about?
I think we are closer to something like that than we think, but it’s also not going to be as marvellous as it appears. It may mean something quite commonplace: an object that you can have a conversation with that tells you what it’s like to be an object which is a different point of view from talking to a human.
So we will become gradually accustomed to more and more sophisticated artificial minds of some description. Will that also slowly pull the rug from underneath philosophers, so that in 40 years’ time there will not be these grand debates about qualia, consciousness, self-awareness and language? Will we instead be looking at the internal workings and saying “Well, there you go, that’s what it is”?
I think that’s right but I don’t see it as a process of pulling the rug from under the feet of philosophers! Philosophers have a neat way of deciding that something isn’t worth talking about anymore, “It’s become boring, let’s talk about something more exciting.” I think that one of two things is going to happen. The scientific and engineering world will eventually say “Well, we think we understand what consciousness is all about, we have made some artifacts which as far as we are concerned tell us not only that they are conscious, but tell us all we need to know about our own consciousness. As far as we are concerned that’s the end of the story.” Philosophers will say “You still haven’t solved the hard problem of consciousness,” but eventually they might get bored with saying that.
Professor Aleksander, thank you very much, that’s been most illuminating. You can turn me off now.
Click.
[Julian Moore has a degree in physics from Bristol University and works in I.T., but considers himself primarily a poet and philosopher.]