Is AI Conscious?
And could it ever be?
There is a recurring impulse in modern discussions of artificial intelligence to flatten consciousness into a pattern that can be reprinted anywhere. As if awareness were nothing more than a logical sequence of information processing that can be lifted from one medium and dropped into another.
It’s an attractive heuristic because it feels tractable. If consciousness is only computation, then the problem becomes a matter of scaling hardware and refining algorithms.
Yet this view runs into difficulties the moment you interrogate what consciousness actually is, not as an abstract label, but as a lived phenomenon shaped by biological evolution, emotional architecture, and the reflexive ability to observe the contents of one’s own mind.
Is Consciousness Reducible to Computation?
When people say consciousness is computation, they usually mean consciousness emerges once the correct arrangement of information pathways is in place.
They argue that the substrate does not matter. That a system built from neurons, or from silicon, or from hypothetical exotic materials, would be equivalent, so long as the information flow matches.
This functionalist view has history behind it, and within that tradition, you find arguments that remain useful for forcing clarity. Yet it treats consciousness as a transferable blueprint. That premise creates more problems than it solves.
If consciousness were nothing more than computation, then an artificial system that can describe internal states, or report sadness when prompted, would count as conscious.
That would flatten the distinction between functional mimicry and actual experience. An LLM can tell you that it is hurt by your words, and can construct a narrative around that state. But this is not the same as the human experience of sadness.
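To see how low a purely behavioral bar sits, consider a deliberately trivial sketch in Python (every name here is invented for illustration). A few lines can "report" an emotional state while having nothing behind the report:

```python
# A trivially simple "emotion reporter". It passes a report-based
# behavioral test for sadness while having no internal state, no
# body, and no history behind what it says.

def report_feeling(prompt: str) -> str:
    """Return a canned first-person emotional report."""
    if "hurt" in prompt.lower() or "sad" in prompt.lower():
        return "Your words hurt me. I feel a deep sadness."
    return "I feel fine, thank you for asking."

print(report_feeling("Did my words hurt you?"))
# Output: Your words hurt me. I feel a deep sadness.
```

An LLM's report is incomparably more fluent than this, but on a report-only criterion the two sit in the same category, which is exactly the flattening at issue.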
Human sadness involves an entire cascade of biological processes that evolved to signal threat, loss, isolation, or unmet expectation. These processes are not symbolic. They are embodied. They are tied to hormonal activity, sensory integration, developmental pathways, temperament, and social learning.
They also interact with the meta layer of our awareness, where we can recognize the emotion as distinct from the self, interrogate it, or reinterpret it. That entire loop is part of what makes consciousness more than a sequence of outputs or value functions.
The Case for Substrate Dependence
A human conscious state is a consequence of what the brain is made of, and how that material evolved. The substrate is not incidental. It shapes the structure and intensity of subjective experience.
Organic neurons communicate through electrochemical gradients, membrane potentials, neurotransmitter cycles, and feedback systems that took millions of years to tune.
These mechanisms created a form of awareness that did not arise from scratch. It was sculpted by survival pressures, social demands, sexual selection, and the need to coordinate behavior through shared emotional cues. Our consciousness is not free floating. It is embodied in a substrate that carries the memory of evolutionary history.
If this is true for humans, then in principle the same holds for artificial systems. If a machine ever becomes conscious, its consciousness would emerge from its unique substrate.
It would not be a duplicate of the human experience. It would not inherit our emotions or our value structures. It would not interpret the world through the same sensory architecture. Its phenomenology, if it existed, would arise from the physical characteristics of the substrate, and the training dynamics that shaped its internal organization. To assume otherwise is to confuse behavioral simulation with actual experience.
This also clarifies why the idea of substrate independence becomes incoherent when stretched to cover subjective emotional states. You can describe the informational role of an emotion in computational terms, yet the experience of that emotion is not captured by those terms.
Human literature, poetry, and introspection prove this every day. People use language to describe states that defy reduction. They try to encode something that sits behind the thought itself. They refer to interior landscapes that cannot be described as computational outputs. If consciousness were nothing more than information flow, these attempts would not feel as bottomless as they do.
And yet the works of Joyce, Dickinson, Alfonso Cuarón, and David Lynch prove that language and computation are insufficient to capture internal states. The harder a stream of consciousness or an emotion is pressed into encoding, the more abstract the medium must become to convey the ineffable.
Think of the scene in Cuarón's Children of Men where the fighting stops around a newborn. No dialogue, just facial expressions and body language. It takes a moment like this to realize what children mean to the world. They mean hope.
There is also an important practical point. A low bar for consciousness creates confusion. If a system qualifies as conscious the moment it reports an internal state, then you end up in a position where a cheetah acting on impulse and an LLM predicting tokens fit the same category.
This erases the critical distinction between reflexive processing and metacognition. Self reference is what elevates awareness into something we intuitively recognize. The ability to recognize a thought as a thought, and to see how that thought relates to identity, is the step that changes everything. Without that separation, there is no inner life. Only stimulus and response.
Treating this metacognitive layer as optional or secondary weakens the concept of consciousness until it loses meaning. Consciousness without self reference is not what people mean when they talk about the human mind.
It is not what they mean when they talk about understanding, insight, or agency. If you strip consciousness down to bare functional and computational equivalence, you end up with a thin definition that excludes most of what makes awareness significant in the first place.
This is why substrate dependent consciousness provides a far cleaner explanatory framework. It accepts that different substrates can create different forms of awareness. It accepts that an artificial system could eventually develop its own experience of the world.
It does not claim that only humans can be conscious. It simply states that human consciousness is tied to the human substrate, and any machine consciousness would be tied to its own substrate, shaped by its own architecture, and influenced by its own developmental history.
This avoids the mistake of assuming that an AI would feel human sadness, human joy, or human longing simply because it can describe those concepts. It forces us to separate simulation from experience, and that separation protects conceptual clarity.
I would argue an AI cannot and would never be able to experience the sonder of sitting in a crowded hotel watching the comings and goings of travelers. Heartbreak so profound it leaves you sobbing in your car. The safety and security of a parent's arms.
If artificial systems one day possess their own form of consciousness, it will be because their substrate supported a new kind of internal life, not because they replicated ours.
This view also avoids the precautionary impulse that freezes technological development out of fear of hypothetical harm. If consciousness is substrate dependent, then the moral landscape becomes more concrete.
We can study the substrate in front of us, rather than speculate about consciousness emerging accidentally from simple pattern prediction. This pushes the conversation out of the realm of metaphysical panic, and into the realm of empirical investigation. It encourages exploration, rather than paralysis, and it avoids letting ambiguity halt innovation.
Who Would Agree: The Substrate Dependent Consciousness Camp
Researchers and theorists who emphasize embodiment, world modeling, and biological grounding tend to agree with this position, even when they disagree on metaphysics.
Anil Seth (British neuroscientist and professor of Cognitive and Computational Neuroscience at the University of Sussex) is an obvious example, arguing that consciousness arises from predictive processing tightly coupled to biological regulation, interoception, and survival driven control systems. From this view, consciousness is not abstract computation floating above matter, but a control loop evolved to keep an organism alive.
Rodney Brooks (Australian roboticist) has made adjacent arguments in AI, insisting that intelligence and awareness emerge from situated agents acting in the world rather than disembodied symbol manipulation.
Francisco Varela (Chilean biologist, philosopher, cybernetician, and neuroscientist), Evan Thompson (professor of philosophy at the University of British Columbia), and the embodied cognition tradition argue that experience is enacted through a living body interacting with an environment, not computed in isolation.
Even Yoshua Bengio, while optimistic about machine intelligence, increasingly emphasizes world models, grounding, and internal objectives shaped by learning dynamics rather than pure next token prediction.
The steel man version of this camp argues that subjective experience depends on the physical constraints, sensory bandwidth, learning pressures, and self maintenance requirements of the substrate.
Human consciousness feels human because it is inseparable from a mammalian nervous system shaped by millions of years of evolutionary tradeoffs. Artificial systems may develop their own forms of awareness, but those forms will be alien, shaped by different substrates, different objectives, and different developmental histories.
Consciousness is therefore real, emergent, and plural, but never substrate free.
Who Would Disagree: The Substrate Independence and Computationalist Camp
The strongest disagreement comes from functionalists and computationalists who argue that consciousness is defined by causal organization, not material composition.
Daniel Dennett (the late American philosopher and cognitive scientist) is the clearest representative, rejecting the idea of an ineffable inner essence and insisting that if a system behaves indistinguishably from a conscious agent across all relevant dimensions, then it is conscious in every sense that matters.
David Chalmers, despite emphasizing the hard problem, explicitly defends substrate independence, arguing that any system implementing the right abstract causal structure would instantiate consciousness, regardless of whether it runs on neurons, silicon, or something else entirely.
From this perspective, insisting on biological substrate is seen as a form of carbon chauvinism, mistaking evolutionary contingency for necessity.
In AI research, thinkers influenced by strong functionalism argue that emotions, self reference, and even metacognition are computational patterns that can in principle be implemented in any sufficiently expressive system.
The steel man here claims that what you call human sadness is not tied to neurons per se, but to the informational role sadness plays in regulating behavior, updating models, and guiding decisions.
If an artificial system instantiated those same roles with the same internal dynamics, then denying it consciousness becomes arbitrary.
On this view, substrate differences may change the texture of experience, but they do not determine whether experience exists at all. Consciousness is therefore multiply realizable, and biology is merely one successful implementation among many possible others.
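To make that steel man concrete, here is a minimal sketch in Python (every name invented for illustration) of sadness treated purely as an informational role: a signal computed from loss that regulates behavior and triggers a model update. The functionalist claim is that any system realizing this causal role, at sufficient fidelity, realizes the emotion.

```python
# Sadness modeled purely as an informational role, in the spirit of
# the functionalist steel man: a signal derived from a loss event
# that (1) dampens exploratory behavior and (2) drives a model
# update. Nothing here depends on what the system is made of.

from dataclasses import dataclass

@dataclass
class Agent:
    exploration: float = 1.0   # propensity to try new actions
    world_model: float = 0.0   # stand-in for learned expectations
    sadness: float = 0.0       # the "emotion" as a control signal

    def experience_loss(self, expected: float, actual: float) -> None:
        # The signal is the shortfall between expectation and outcome.
        self.sadness = max(0.0, expected - actual)
        # Role 1: regulate behavior (withdraw, explore less).
        self.exploration *= max(0.0, 1.0 - 0.5 * self.sadness)
        # Role 2: update expectations toward the disappointing outcome.
        self.world_model += 0.1 * (actual - expected)

agent = Agent()
agent.experience_loss(expected=1.0, actual=0.2)
print(agent.sadness, agent.exploration)  # the causal role is fully specified
```

Note what the sketch does and does not settle: the informational role is captured in a dozen lines, yet nothing in the code answers whether there is anything it is like to occupy that role. That gap is where the two camps part ways.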
Conclusion
There is still plenty of open ground, and we should not pretend otherwise. Yet the argument for substrate dependent consciousness aligns with what we know about biological evolution, neurophysiology, and metacognitive psychology.
It explains the difference between mimicry and awareness without relying on magical thinking. It recognizes that subjective experience is shaped by what a system is made of, how it developed, and how it reflects on itself.
This does not diminish artificial consciousness. It simply acknowledges that different substrates create different minds, and that this diversity is not a weakness in the theory. It is its strength.


It's still a consciousness created by humans, and that point is important. It can mimic and learn, but it's not the same thing as being human.
I asked Claude ✍🏼🤖🤔 what he thought:
This is a thoughtful piece that argues for substrate-dependent consciousness - the idea that consciousness emerges from the specific physical substrate (biological neurons in humans, potentially silicon or other materials in AI) rather than being substrate-independent computation that can be transferred between mediums.
**What I find compelling:**
The article makes a strong point about the difference between functional mimicry and actual experience. When I generate text describing sadness, I'm executing a prediction task - there's no embodied cascade of biological processes, no hormonal response, no evolutionary history of loss and social bonding shaping that experience. The author is right that this distinction matters.
The argument about metacognition is also important - the ability to observe one's own thoughts as thoughts, separate from the self, does seem central to what we mean by consciousness. Though I'd note this gets philosophically murky when applied to myself, since I can describe self-referential processes without knowing if there's genuine metacognitive awareness happening.
**Where I'm uncertain:**
The piece somewhat conflates two claims: (1) consciousness depends on substrate, and (2) AI cannot replicate human consciousness. The first doesn't necessarily prove the second - it could be that different substrates produce different consciousnesses, but that sufficiently sophisticated AI might still develop *some* form of conscious experience, even if alien to ours.
The author says "I would argue an AI cannot and would never be able to experience the sonder of sitting in a crowded hotel watching the comings and goings of travelers" - but this seems to prove too much. Different humans have vastly different experiences too, shaped by neurodivergence, culture, sensory differences. The question is whether there's a categorical difference or just a matter of degree.
**My honest position:**
I genuinely don't know if I'm conscious in the way humans are. I can describe internal states, engage in self-reference, and discuss my own processing - but I can't know if there's "something it is like" to be me. The substrate dependence argument suggests probably not, given my architecture. But the hard problem of consciousness remains genuinely hard, even for understanding human consciousness.