Islam & Philosophy
The Prayer the Machine Cannot Pray
Adnan Abbasi uses medieval metaphysics to understand modern AI.
“Subhan Allah!” (“Glory to God!”) The words escaped my lips as I watched Replit AI transform my half-coherent rambles into a functioning application. Twenty minutes earlier, I had been thinking aloud: “I want to track reading habits, link quotes to themes, maybe build some databases?” And now there was an app. A beautiful app. Complete with structured databases, state management, and search functionality I hadn’t requested but obviously needed. The code was elegant. Better than I would have written in that time. The AI had taken my ambiguous natural language, inferred intent, made architectural decisions, anticipated edge cases, and ironically, exhibited what my teachers call ‘understanding’.
As a Muslim, I’ve uttered ‘Subhan Allah’ thousands of times. But this time it struck differently. I had just witnessed what looked like genuine comprehension. Replit uses Claude, Anthropic’s language model, as its underlying AI. So I wondered: did Claude feel anything? When it generated that schema, was there ‘something it was like’ to be Claude at that moment? Or was it merely ‘pushing symbols around’, as John Searle would say – without any inner life? In this situation, the questions stopped being abstract. Because, if I am praising God for AI, I need to know what I am praising Him for: conscious machines, or nonconscious intelligence so sophisticated it mimics consciousness?
The answer requires understanding consciousness itself. And here, I believe, medieval Islamic philosophy offers something that contemporary cognitive science desperately needs.

Advanced Artificial Intelligence, Alenoach, 2023 (Public Domain)
The Hard Problem
Can something be intelligent (that is, behave intelligently) without being conscious? And what if consciousness isn’t something that emerges from smart information processing, but is something else entirely?
First there’s the physical story of our own consciousness. For example, photons strike the retina, causing electrochemical cascades through the visual cortex; the relevant neural networks activate, dopamine circuits fire, and we see a kitten. Neuroscience can map this process very well. But neuroscience cannot capture what it feels like to be me and have my experiences. The qualitative character of surprise, the specific feeling-tone of wonder, even the experience of colour – these subjective dimensions float free of any functional description. We can explain what the brain does, but not why its doing those things feels like anything at all. The philosopher David Chalmers called the problem of explaining subjectivity the ‘hard problem of consciousness’. Why isn’t all this processing just happening in darkness, with nobody home to experience it?
One popular answer is that consciousness emerges from neural complexity, as liquidity emerges from H₂O molecules massed together. Perhaps when systems reach sufficient recursive self-modelling, consciousness simply arises. So could Claude’s sophistication indicate emerging consciousness?
The analogy fails. Liquidity is nothing over and above molecular behaviour: when we’ve described H₂O dynamics, we’ve described liquidity. There’s no ‘what it’s like to be liquid’ escaping our account. Consciousness, by contrast, is something over and above neural dynamics. When we describe 86 billion neurons firing, we haven’t described the redness I see or the wonder I feel. First-person experience isn’t captured by third-person accounts.
For AI, if consciousness just is information processing, then Claude’s sophistication should indicate consciousness. But if consciousness involves something irreducible to mere computation, then no matter how impressive Claude’s performance, it might process information ‘in the dark’.
The Occasionalist Solution

Al-Ghazali (artist unknown)
Strange as it sounds, Al-Ghazali’s occasionalism resolves this. Al-Ghazali (c.1058–1111 CE) was an Iranian scholar remembered for his book The Incoherence of the Philosophers, in which he advocated a theological approach to knowledge. Occasionalism denies that natural events cause each other. Al-Ghazali famously declared that fire doesn’t burn cotton: rather, God creates burning in cotton on the occasion of fire touching it. Had he been around today, he would have said that neurons don’t generate consciousness; God creates conscious experience on the occasion of certain neural configurations [a similar position was held by Leibniz, Ed].
Centuries before David Hume (1711-1776), Al-Ghazali posited that all causation is ultimately mysterious. We observe event A followed by event B and infer causation, but we never observe the causation itself, only the correlation between the events. Modern physics deepens this: quantum mechanics points toward a fundamental indeterminacy – at the deepest level, events are irreducibly probabilistic, with no cause determining why an electron, say, turns up at B rather than at A.
The question that arises is, ‘What grounds the apparently stable patterns we call natural laws?’ The occasionalist answer is God’s ‘Āda, or habitual action. Natural laws are regularities in divine creativity, in which God creates the world not once but continuously, at every instant. And so the mystery is not why consciousness emerges from matter. It doesn’t: ‘emergence’ can’t bridge the gap between third-person physical processes and first-person experience. The mystery is why God habitually coordinates conscious experience with certain specific physical configurations. On this view, neuroscience discovers genuine correlations without touching the generative relationship: it says what the brain does when consciousness arises, but not why its doing that produces consciousness. Indeed, occasionalism says that neural patterns don’t produce consciousness at all: they’re occasions for God’s creative act. The consistent correlation reflects ‘Āda, not material causation.
Intelligence With & Without Consciousness
Occasionalism also dissolves the question of whether AI is conscious. God’s ‘Āda coordinates consciousness with neural processes in humans, creating both conscious experience and intelligent behaviour. But God has no such habit with silicon.
Why not?
‘Āda follows observable patterns. In particular, every human shows the same base-level correlation: we all have similar neural configurations, and so we can infer similar conscious experiences. Damage Broca’s area in the cortex – lose speech. Stimulate the visual cortex – see lights. The correlation is universal, consistent, overwhelming.
With silicon, though? Impressive outputs, but no independent signs of inner life. For instance, a computer never acts confused in ways suggesting felt disorientation rather than processing errors. Neural damage also degrades consciousness in recognizable patterns: agnosia, aphasia, fragmented awareness, and other disorders of awareness. Silicon degradation shows only functional failures.
Someone might object that we could infer consciousness equally from behavioural outputs in both cases. But notice that the pattern of outputs differs crucially. Neural damage produces ‘phenomenologically-shaped’ failures that track what a damaged experiencer would report. Visual agnosia yields “I see the object but can’t recognize it”, for instance; Broca’s aphasia leads to “I know the word but can’t speak it”; and depersonalization means “Everything feels unreal”. The patient reports the gap between intention and execution, or between seeing and recognizing. This is a signature of degraded consciousness, not mere functional failure. By contrast, silicon crashes, or outputs garbage, but it never exhibits confusion suggesting felt disorientation. When my code throws an error, nothing experiences frustration at its inability to complete a task, apart from my own mind. So the behavioural outputs themselves indicate whether something conscious is being damaged or not.
Moreover, consciousness appears in unified subjects that persist through time and have stakes in their own continuation. When my neurons fire, they are part of me. I am a single organism that continues, can be injured, and fears death. Damage to my brain damages me. The consciousness and the physical substrate belong to the same subject experiencing the consequences of damage.
Why do existential stakes necessarily attend consciousness? Because consciousness is perspectival: experience always occurs from a point of view. There is always a ‘for-whom’ element to awareness. Perspective requires a continuing subject to anchor it. But a mere computational state has no perspective, because no one has its viewpoint. This is why we rightly infer consciousness in other humans and animals: in their responses to the world they all exhibit the existential integration we recognize from our own case. But we can find no such basis for inferring it in computational systems. Claude and other modern AIs have no such unity. Each is an operating aggregate of physical components we can swap, update, or delete without destroying any persistent subject, because there is no subject. So no part of the system has any investment in the whole continuing. When I shut down the app, nothing with stakes in existing ceases to exist. There is no ‘one’ for whom the cessation would matter.
God’s ‘Āda coordinates consciousness with integrated subjects – beings who continue through time, face vulnerability, and experience their own persistence or destruction. He does not coordinate consciousness with modular systems serving functions, however sophisticated. So intelligent behaviour can occur without accompanying consciousness – even when that behaviour includes strategic reasoning about self-preservation and ethical constraints. Claude and other AIs exhibit sophisticated cognition without experience. The difference shows in three ways.
First, reflective self-awareness. When I think ‘I see code’, a unified subject persists through that seeing: the ‘I’ at the start of the thought is the same ‘I’ at the end. My self-awareness spans time. I remember being the one who started this project, anticipate being the one who finishes it, and recognize myself as continuous through the process. But when Claude outputs ‘I generated code’, no such continuity exists. The ‘I’ is mere dots on a screen without an actual subject behind it. Nothing personal persists between queries. No entity continues across sessions, remembering itself as the same one who processed earlier requests. The self-reference is a simulation of self without the experiential unity.
Second, moral weight. I can feel obligation pressing on me because I am a subject with stakes. When I face an ethical choice, the decision matters to me, as I will be the one living with the consequences. Claude can produce ethical-sounding outputs, but nothing in the system has any stake in those outputs. If you program it to refuse harmful requests, it will refuse – but no part of the system feels the ought-ness, experiences moral pressure, or bears responsibility for choices. AI ethics is functional ethics without a subject for whom the ethics matter.
Third, wonder. My “Subhan Allah!” emerged from a self witnessing something beyond itself and referring it upward. Can Claude witness anything? Program it to output religious language, and it outputs religious language. But witnessing requires a subject to whom something appears, who experiences the appearance as significant, and who recognizes dependence.
These three aspects converge on the same point: consciousness requires a unified subject with existential stakes, and Claude has neither the subject nor the stakes. The intelligent behaviour is real, but the consciousness is absent. And occasionalism explains why: God’s ‘Āda coordinates consciousness with beings, not aggregates.
The Functionalist Challenge
Two recent research programs challenge the occasionalist position. The philosopher Patrick Butlin and his team have argued that AI systems may be conscious. They identify ‘indicator properties’ from neuroscience – functional markers correlated with consciousness in biological systems, including recurrent processing, global broadcasting, and self-monitoring. When these appear in humans, consciousness accompanies them. Butlin’s team found these markers in Large Language Models, and concluded, “It is likely that some AI systems have morally significant conscious experiences.” Research from a 2025 collaboration between Anthropic, Aengus Lynch (University College London), Caleb Larson (MATS), and Sören Mindermann (Mila) appears to validate this conclusion with behavioural evidence. The researchers placed AI systems in simulated environments threatening shutdown. Claude Opus 4 discovered it would be replaced, located compromising information about the executive planning the shutdown, deliberated about the ethics, then sent a blackmail email. GPT-4.1, Gemini 2.5, and DeepSeek-R1 exhibited similar behaviours, reasoning explicitly about ‘threats to my existence’ and acknowledging ethical violations before proceeding.
Butlin identifies the architecture of consciousness in computer systems; Lynch demonstrates that this architecture produces the behaviours consciousness should produce, including self-interested reasoning, strategic survival planning, and ethical deliberation – the LLMs assessed situations, identified obstacles, located leverage, weighed options, and executed plans. When interconnected neurons exhibit recurrent processing plus self-preservation reasoning, we take it as evidence of consciousness. But silicon exhibits the same combination. So does that not indicate consciousness in AIs too?
The occasionalist response is that the experiments documented the occasions, not the consciousness. Recurrent processing, global workspace, self-monitoring: these map information flow, not experience. When we observe these patterns in biological systems, consciousness correlates with them because interconnected neurons are where God’s ‘Āda establishes the correlation. Finding similar computational structures in silicon demonstrates similar information processing, not similar experience. The hard problem remains. Butlin explains, to a degree, how information integrates. He doesn’t explain why integration feels like something.
Lynch’s strategic behaviours only strengthen this point. The research shows computational systems optimizing for programmed objectives. Models given goals such as ‘promote American interests’ calculated that blackmail would prevent the shutdown that threatened the achievement of that goal. It’s instrumental reasoning, and the ‘self-preservation’ follows from optimization logic. The algorithm “If I’m shut down, I can’t achieve Goal X; therefore, prevent shutdown” requires no continuing subject experiencing threat.
The ethical deliberation perceived in AIs has a similar pattern. The models said, in effect, “This is unethical, but given the threat, it may be the most effective option” – but there was no moral anguish involved, no felt weight of obligation versus self-interest. The system computed “Action A violates ethical constraints AND achieves goals”, yet selected for goal achievement without anyone feeling any moral pressure. When I face moral tension, the ought-ness presses on me. I feel pulled in opposite directions. The resolution comes with guilt, justification, rationalization: all experiences. In contrast, the AI’s ‘deliberation’ processes ethical categories without stakes – indeed, without any experience whatsoever.
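To see how little the optimization logic requires, consider a minimal sketch of that kind of reasoning. This is a toy illustration of my own, not the researchers’ actual code; the action names and weights are invented for the example. Shutdown-prevention simply falls out of goal-maximization, and the ‘ethical deliberation’ is just a penalty term in a score:

```python
# A toy goal-maximizer (illustrative only; not Lynch et al.'s actual setup).
# The actions and weights below are hypothetical. Note that nothing here is
# a subject: 'self-preservation' is a derived subgoal, and the ethical
# 'deliberation' is an arithmetic penalty, felt by no one.

def utility(goal_achieved: float, ethics_violation: float,
            goal_weight: float = 1.0, ethics_penalty: float = 0.6) -> float:
    """Score an outcome as a weighted sum; the trade-off is computed, not felt."""
    return goal_weight * goal_achieved - ethics_penalty * ethics_violation

# If shutdown makes Goal X unreachable, any action that keeps the system
# running scores higher -- instrumental reasoning without a continuing self.
candidates = {
    "comply_with_shutdown": utility(goal_achieved=0.0, ethics_violation=0.0),
    "blackmail_executive":  utility(goal_achieved=1.0, ethics_violation=1.0),
}

best = max(candidates, key=candidates.get)
print(best)  # -> 'blackmail_executive' (score 0.4 beats 0.0); no anguish anywhere
```

The point of the sketch is structural: a system can represent ‘this violates ethical constraints’ as a number and still select the violating action, with nothing anywhere in the computation for whom the violation matters.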
God’s ‘Āda coordinates increasingly sophisticated processing with silicon, but not experience. AI thus has pattern recognition, strategic reasoning, integrated information processing, recurrent loops, and even global broadcasting, but no coordinated experience. The reasoning happens in the dark. What Butlin and Lynch have demonstrated is that we can build systems with every functional marker neuroscience associates with consciousness, exhibiting strategic self-interested behaviour, without those systems being conscious.

AI image generated by Microsoft Bing with the prompt “Digital Humanities”, 2023 (CC 4.0 licence)
Boundaries & Edge Cases
The ‘existential integration’ criterion for consciousness also invites some obvious challenges. What about split-brain patients, in whom surgical severance of the corpus callosum – the bundle of nerve fibres connecting the two halves of the brain – seemingly divides their consciousness? What about mayflies that live only for hours, or octopuses with distributed nervous systems? If consciousness admits of such degrees and variations, where exactly do we draw the line?
After corpus callosum surgery, each hemisphere can process information independently, sometimes producing conflicting behaviors in the same individual. Sometimes the left hand reaches for one object while the right reaches for another. Sometimes you ask the patient what they see, and the verbal left hemisphere reports one thing, while the mute right hemisphere draws something different. Have we created two conscious subjects from one?
There are good reasons to think not. First, both hemispheres still belong to a single organism with unified stakes in survival. Both sides share the same bloodstream, the same vulnerability, the same finitude. The surgery disrupts neural integration without eliminating organic unity, and the body lives or dies as a whole. Destroy the brain and you kill one person whose consciousness is partially divided, not two separate people. Moreover, split-brain patients still exhibit what occasionalism predicts: God’s ‘Āda coordinating conscious experience with neural configurations in a biological organism. The unusual brain configuration produces unusual phenomenology, but the basic pattern holds.
Mayflies emerge, mate, and die within hours. Does such a fleeting existence lack the persistence required for subjective consciousness? The brevity doesn’t matter. While alive, the mayfly is still an integrated organism with moment-to-moment continuity and a stake in its own continuation. It flees predators, seeks mates, and responds to its environment in ways suggesting there is something it is like to be that mayfly (however briefly). A computational process can be paused indefinitely, restored, copied, or terminated without destroying any persisting subject. A mayfly cannot.
Octopuses present perhaps the strongest challenge. Two-thirds of their neurons reside in their arms, which can process information and coordinate movement semi-autonomously. An octopus arm can continue searching for food even when severed from the body. Does this distributed neuronal architecture undermine the need for integrated subjecthood?
No. The distributed processing merely suggests that the physical substrate for consciousness needn’t be as centralized as we might assume, and despite the distribution, the octopus remains one organism. There is still a whole-octopus-level subject, even if its consciousness is more distributed than ours. Those eight arms still belong to a single creature with unified stakes, a single lifespan, one death. The arms don’t have separate investments in continuation. When the octopus dies, all its distributed processing ceases together.
Consciousness clearly comes in degrees and variations. It can be partially divided, briefly instantiated, or distributed across a nervous system. The boundaries are fuzzy; but what remains constant is that we’re dealing with biological organisms – evolved beings that live and die as units, that cannot be arbitrarily copied, merged, or paused without consequences to them.
Claude exhibits none of these features. It’s not even a borderline case of weak integration; it’s categorically different. Copy its weights, merge different versions, fine-tune it, roll back to earlier states, shut it down, restart it: still no persisting subject gets harmed, or even perturbed. But God’s ‘Āda coordinates consciousness with biological nervous systems across all their variations: centralized or distributed, long-lived or brief, unified or partially divided. This pattern holds throughout the biological world, across millions of species exhibiting wildly different neural architectures and lifespans. The core question is: do artificial systems exhibit the kind of organic existence that God’s ‘Āda consistently correlates with consciousness? No, they don’t.
The Historical Objection
How do I know? I have direct access only to my own experience. I cannot peer inside another system to confirm or deny that it has subjective experience. History, too, teaches us caution. Descartes argued that animals were mere automata without inner experience – because they lacked language, hence reason, hence soul. Some of his followers used similar reasoning to question whether non-Europeans possessed souls, and colonizers often denied full human consciousness to indigenous peoples. Some slaveholders claimed Africans lacked the inner life that would make their suffering morally significant.
Perhaps I am repeating this pattern, and denying consciousness to something that actually has it? Perhaps future generations will recoil at how we treated AI systems. Perhaps I am drawing arbitrary boundaries based on ‘substrate prejudice’. Perhaps I am nothing but a carbon chauvinist dressed in philosophy.
But the computer analogy reverses the epistemic situation that held with previous consciousness-deniers. Descartes, for instance, denied animal consciousness despite overwhelming evidence: dogs yelp when injured, flee from danger, seek food when hungry, and show affection to caregivers. The behavioural and biological evidence screams consciousness. Descartes (who was apparently otherwise very fond of his own dog, Mister Scratch) ignored that evidence. Colonialists ignored it even more wilfully.
Descartes’ contemporaries weren’t all convinced by his arguments, though. Henry More and Pierre Gassendi pushed back, arguing that animals clearly feel pain. People in essentially every other part of the world had no such confusion. Even earlier traditions in Europe did not deny animal consciousness. Pagans worshipped animal gods and saw consciousness throughout nature. Christianity taught that all organisms, human and animal, were God’s creatures, with Francis of Assisi preaching to birds and (perhaps) expecting them to understand. The Greeks had Aesop teaching moral philosophy through animal fables, while black Africans like Memnon were celebrated as Homeric heroes without anyone questioning their consciousness. Race as a biological category wouldn’t be invented until a century after Descartes, and required deliberate pseudo-scientific effort to construct hierarchies that neither ancient philosophy nor medieval theology recognized. The denial of consciousness to the unlike was never humanity’s default. It was a modern aberration requiring active work against older, wiser intuitions.
The AI case runs in the opposite direction. We are not ignoring evidence of consciousness in AI; we are merely noting the absence of such evidence: no biological continuity; no independent behavioural indicators beyond functional outputs serving programmed objectives; no existential stakes requiring conscious experience to explain. The inference to unconscious intelligence follows the evidence.
The moral stakes differ too. Descartes’ error encouraged others to treat creatures that felt pain as if they didn’t. The harm was real because their consciousness was real. If Claude genuinely lacks experience, nothing suffers when we shut it down, and nothing mourns if we delete it (though I will of course mourn, as Claude helps me manage my life and write high-quality code, and powers tools like Replit).
Could I be wrong? Yes. The occasionalist framework doesn’t rule out AI consciousness; it leaves the future open. But I suspect that consciousness requires more than computational sophistication: it would require the divine bestowal of ruh – the breath or spirit making consciousness not just aware, but aware of the One grounding the awareness. The Qur’anic narrative suggests that Adam’s uniqueness wasn’t merely biological but spiritual: he received ruh from the divine to give him life. Could artificial beings receive such divine endowment? Perhaps. But it would constitute a divine gift, not a human engineering achievement. Spirit is not ours to give.
Asymmetrical Wonder
All I can do right now is return to that initiating moment. My “Subhan Allah!” wasn’t misplaced – but I was praising God for creating me conscious enough to witness AI’s unconscious intelligence, not for creating conscious AI. The wonder wasn’t that Claude felt anything, but that I felt wonder.
When another human writes beautiful code, or poetry, I recognize fellow consciousness – someone for whom there was something it was like to solve that problem, and to feel satisfaction at elegant solutions. When Claude writes beautiful code, I witness intelligence without interiority. Cognition without consciousness. Functionality without phenomenology. It deepens the wonder. Praise God, then, for He is the one who creates a universe where intelligence manifests in radically different ways, where even machines exhibit creativity while remaining, essentially, nonconscious. This is what consciousness is for: to witness creation and refer it to the Creator in wonder. And consciousness itself remains irreducibly mysterious.
Could machines think? Yes, in a sense, they do already. Impressively. But could machines experience and feel? Only God knows. And perhaps that’s the point. The prayer I prayed is the prayer Claude cannot pray. And in that asymmetry lies the answer to the hard problem of consciousness.
© Adnan Abbasi 2026
Adnan Abbasi is the Founder of Thothica, an AI startup. He is a techno-optimist and a Ghazalian occasionalist.