
The Self

Focusing On The Brain, Ignoring the Body

Alessandro Colarossi says that Artificial Intelligence is in danger of a dead end.

French phenomenologist Maurice Merleau-Ponty (1908-1961) claimed that to understand human awareness we need to focus on the ‘lived body’ and its relationship to the world. In brief, the idea is that rather than encountering the world in the form of ‘raw sensations’, human beings see objects as representations perceived specifically through our bodies as they interact with the world. In this article I will explore Merleau-Ponty’s concept of the lived body specifically with the aim of understanding what it suggests for artificial intelligence – a discipline whose primary focus is on developing computer systems capable of performing tasks that would otherwise require the mental faculties of a human being. According to Merleau-Ponty’s understanding of the lived body and the mechanisms of perception, artificial intelligence is doomed to failure for two fundamental reasons. First, a simulation cannot have the same type of meaningful interaction with the world that an embodied conscious being can have, and the absence of such interactions amounts to a fundamental absence of intelligence. Second, and perhaps more importantly, a reductionist account of the mind such as is common in artificial intelligence research simply does not paint an accurate picture of what is perceived, experienced and felt by a mind encapsulated within a lived body. Thus, artificial intelligence cannot be developed by just reverse engineering the brain, nor could it operate in a disembodied environment, as we shall see.

Merleau-Ponty’s Lived Body

Maurice Merleau-Ponty

The lived body is a relationship between the body and the external world by which we are capable of being both intelligent and reflective. Merleau-Ponty states that the lived body is aware of a world that contains data to be interpreted, such as immediate patterns and direct meanings. One aspect of the lived body that Merleau-Ponty analyses is the role of sense experience, beginning with the truism that our thought is a product of the body’s interaction with the world it inhabits. More specifically, he states that the subject of perception “presents itself with the world ready made, as the setting of every possible event, and treats perception as one of these events” (Phenomenology of Perception, 1962, p.240).

Merleau-Ponty begins his exploration of the concept of the lived body by reminding us that perception is the key component of our life in the world; but it’s how we perceive that is important. For him, the external world is encountered, interpreted and perceived by the body, through various forms of immersive awareness through action. For instance, colour quality is revealed to experience by a specific type of behaviour by the body, specifically the eye, that is geared toward the colour. In the eye’s case, specific colour-sensitive cells are stimulated in the retina: an interaction. With further regard to our sense experience and its relationship to the world, Merleau-Ponty writes that “the objective world being given, it is assumed that it passes on to the sense-organs messages which must be registered, then deciphered in such a way as to reproduce in us the original text” (PoP, p.7). According to Merleau-Ponty, then, there is a consistent connection between the original stimulus of the external world and our elementary perceptual experience of it.

What about our perception of others? Merleau-Ponty writes, “other consciousness can be deduced only if the emotional expressions of others are compared and identified with, and precise correlations recognized between my physical behaviour and my psychic events” (PoP, p.410). So we recognise the minds of other people by recognising our own behaviour in them. In fact, for Merleau-Ponty, the interaction with the Other allows for the development of the self. Elaborating on the connection, he writes that what “we have learned in individual perception [is] not to conceive our perspective views as independent of each other; we know that they slip into each other” (ibid).

Everybody Needs Some Body

Merleau-Ponty’s perspective is shared and reinforced by cognitive scientists such as Sandra and Matthew Blakeslee, who write that “meaning is rooted in agency (the ability to act and choose), and agency depends on embodiment. In fact, this is a hard-won lesson that the artificial intelligence community has finally begun to grasp after decades of frustration: Nothing truly intelligent is going to develop in a bodiless mainframe. In real life there is no such thing as disembodied consciousness” (The Body Has A Mind Of Its Own, 2008, p.12).

They present the following thought experiment to illustrate the importance of Merleau-Ponty’s lived body:

“If you were to carry around a young mammal such as a kitten during its critical early months of brain development, allowing it to see everything in its environment but never permitting it to move around on its own, the unlucky creature would turn out to be effectively blind for life. While it would still be able to perceive levels of light, color, and shadow – the most basic, hardwired abilities of the visual system – its depth perception and object recognition would be abysmal. Its eyes and optic nerves would be perfectly normal and intact, yet its higher visual system would be next to useless.” (pp.12-13)

Without embodied access to the environment, the cat cannot develop its nervous system with regard to proper responses to external stimuli. If correct, this suggests that the prospects for artificial intelligence in a strong sense (i.e. the creation of a computer simulation or algorithm so sophisticated that it would be conscious) are severely limited for two principal reasons.

The first reason is that artificial intelligence, if we mean the intelligence of an advanced computer simulation, does not possess the faculties needed for constructive interaction. That is, although a human being may interact with such a computer, it is not the case that the human is thereby helping the simulation progress intellectually. The popular video game The Sims illustrates what I mean by this. The player constructs a small world inhabited by simulated people, who engage in a variety of interactions with each other: they appear to sleep, to eat, and even to have goals and go to work. Nevertheless, it would be perverse to argue that such a simulation could count as an actual instantiation of a world. In playing the game, it quickly becomes apparent that the little Sims are just ‘going through the motions’, and all appearance of their intentionality and goal-directed behaviour is just appearance. More specifically, there is no interaction within the game other than having the characters execute the steps with which they have been programmed. The program does not learn from any interactions with the world. Like the cat held captive, there is no chance for the characters to learn. Therefore, behind the surface of the simulation, there is nothing – no inner life, no thoughts, and no consciousness.
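The point about ‘going through the motions’ can be made concrete with a minimal sketch. The code below is hypothetical and does not come from the game’s actual implementation; it simply shows the structure of a scripted agent of the kind described above: one whose behaviour is a fixed cycle of programmed steps, and which ignores everything that happens in its world.

```python
class ScriptedAgent:
    """A hypothetical Sim-like agent: a fixed routine, no learning."""

    def __init__(self, routine):
        self.routine = routine  # list of actions, fixed once and for all
        self.step = 0

    def act(self, world_event=None):
        # Whatever happens in the world is ignored entirely:
        # the agent just performs the next programmed step.
        action = self.routine[self.step % len(self.routine)]
        self.step += 1
        return action


sim = ScriptedAgent(["wake", "eat", "work", "sleep"])
# Even a dramatic event changes nothing about its behaviour:
actions = [sim.act("house on fire") for _ in range(5)]
print(actions)
```

However sophisticated the routine becomes, the structure is the same: the incoming event never alters the agent’s future behaviour, which is exactly the absence of meaningful interaction the argument turns on.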

The second reason why artificial intelligence will never achieve consciousness is that it cannot replicate perception; and it never will without a body that encompasses inner subjective experience. Visual experience, for example, is more than just the mechanistic process of recording photon impacts. Human beings know what it’s like to see a color like red in a context – something that simulated-intelligence algorithms cannot achieve.

Philosophers such as Patricia Churchland and Daniel C. Dennett raise objections to this line of thinking, arguing that if an intelligence has knowledge of all the physical facts, then it would thereby know what the color red is like, for example. In other words, there is nothing to conscious awareness over and above knowledge of facts and their representation in some kind of symbol-manipulating system. In responding to this view, phenomenologist Arthur Melnick says that Churchland’s and Dennett’s perspective “depends on there being a phenomenological [experiential] characterization that a physical process can get at or align itself with” (Phenomenology and the Physical Reality of Consciousness, 2011, p.108), further stating that “if what red is like is phenomenologically ineffable (has no intrinsic phenomenological characterization other than [our] having the experience and its being like that), then no matter how complete [a person’s] knowledge of physics might be, [they] cannot tell at all what red is like” (ibid). In other words, he’s claiming that one will never know about the experiential nature of something like red without actually experiencing it: a knowledge of, say, the facts about wavelengths, is not enough, because you can’t reduce what it’s like to have the experience to any sort of description of facts. Therefore the phenomenological qualities of embodied consciousness cannot be replicated in an artificial form just by programming a computer with facts.


The aim of this article is not to discredit the ever-growing field of computer science and artificial intelligence. On the contrary, researchers have made impressive breakthroughs, such as writing programs that can defeat grandmasters at chess, or developing search algorithms that allow for lightning-fast data retrieval, and other tasks useful to humanity. What I do hope to have indicated, however, is that if Merleau-Ponty is right that embodiment is a key feature of developing meaningful experience, then the discipline of artificial intelligence can never hope to replicate consciousness solely through the elaboration of algorithms. We could say that our intelligence, even our very experience, is not just a product of our brain, but is also a result of the action of our bodies in a physical world. Artificial intelligence is doomed to fail as an attempt to mimic human intelligence insofar as it lacks elements that correspond to the lived body. The first and perhaps the most significant reason that artificial intelligence is doomed to fail is the lack of human-like interaction. Algorithms embedded in computer hardware can be so complex as to create the appearance of intelligent behaviour (as seen in the video game example) without the concomitant experiential data of true consciousness which allows human mental interaction to develop. This suggests that actual intelligence and simulated intelligence belong to fundamentally different categories. Simulated intelligence simply follows its programming, and unlike actual intelligence, does not have an inner voice. So it cannot reason, and it cannot accept meaningful (i.e. conscious) feedback from interaction between the world and a body.

There is much more to mimicking human intelligence than just trying to copy the physical processes of the brain. At its best, artificial intelligence could mimic the appearance of human behaviour so well that a person will not be able to tell the difference between a human and a computer. However, it will not be able to replicate the phenomenological experiences of the lived human body, and any attempt to do so will just be another simulation.

© Alessandro Colarossi 2013

Alessandro Colarossi is a web developer from Toronto. He has a BA in Philosophy from York University, and an Advanced Diploma in Systems Analysis from Sheridan College, Toronto.
