Artificial intelligence has made impressive progress in recent years. At the same time, the idea persists that “real” artificial intelligence must also develop consciousness. This assumption, however, is neither technically sound nor practically sensible, as the following analysis shows.
A fundamental misunderstanding in the discussion about artificial intelligence is the implicit equation of consciousness with cognition. This idea reduces consciousness to its cognitive aspects — as if consciousness were nothing more than intelligent thinking and problem-solving. However, this fundamentally fails to recognize the true nature of consciousness.
Consciousness encompasses far more than cognition. It includes subjective experience, feelings, bodily sensations, a sense of self and a phenomenal quality of experience. The idea of a “purely cognitive consciousness” would be a contradiction in terms, like color vision without colors or listening to music without sound. Consciousness is by definition experienced, felt consciousness, and we know this not only by definition but from our own self-experience.
This definition leads to a fundamental circularity: the “self” of consciousness is always presupposed whenever we speak of consciousness. A consciousness without a self would be like seeing without a seer, logically impossible. This self-referentiality is not simply one feature of consciousness among others; it is its basic structure. “Self-awareness” is not a quality that can be added to an already existing system, but the prerequisite for being able to speak of consciousness at all. This fundamental circularity makes it impossible to construct or program consciousness from the outside.
In attempting to create artificial consciousness, we run into a fundamental epistemological dilemma: as subjects, we try to understand what subjectivity is. Our thinking is supposed to grasp what thinking is. We are expected to implement something that we know only from the inside, from the first-person perspective of beings who are themselves affected by it.
This is not a trivial difficulty but a problem of principle: How can we construct from the outside something that we know only from the inside? How are we supposed to objectify the structures of consciousness if we only ever experience them subjectively? It is as if we were trying to draw the exterior view of a house from the inside, without ever being able to leave it.
This epistemological barrier reinforces the technical and conceptual difficulties in the development of artificial consciousness. It shows that the problem is not only a practical or technical one, but a fundamental philosophical dilemma.
First of all, it is therefore important to make a clear distinction between intelligence and consciousness. Intelligence in the sense of problem-solving skills and information processing is something that we can already implement very well technically. A simple calculator demonstrates a form of highly specialized mathematical intelligence — without even being aware of it. Modern AI systems can translate languages, recognize patterns and master complex strategy games, all without subjectively experiencing these activities.
Consciousness, on the other hand, refers to the subjective quality of experience, the “what it is like” of experiencing something. It is the inner perspective of a system that perceives itself as an experiencing subject. This distinction alone makes it clear that high intelligence is possible without any consciousness.
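To make the distinction concrete, here is a minimal, purely illustrative sketch in Python (all names are hypothetical): a tiny “solver” that picks the best move under a fixed evaluation function. It processes information and solves its problem, which already counts as narrow intelligence, yet there is plainly nothing it is like to be this program.

```python
# A deliberately trivial "intelligent" system: it searches the available
# moves and returns the one with the highest score under a fixed
# evaluation function. Information processing without any experience.

def best_move(moves, score):
    """Return the move that maximizes the given evaluation function."""
    return max(moves, key=score)

if __name__ == "__main__":
    # Toy example: three options with hand-assigned values.
    options = {"defend": 1, "develop": 3, "attack": 2}
    choice = best_move(options, score=lambda move: options[move])
    print("Chosen move:", choice)  # -> develop
```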
A major reason why digital systems cannot develop consciousness lies in the nature of how they work. True consciousness requires self-organizing, autocatalytic processes — systems that catalyze their own emergence and development. This self-organization is far more complex than often assumed.
A conscious system first requires a highly complex, self-organizing nervous system. This cannot simply be “constructed”, but must arise through self-organization. In biological systems, this neuronal architecture develops through the interplay of genetic information and environmental influences. The resulting structures are not predetermined but emergent; they arise from the interaction of many factors and feedback loops.
The neuronal architecture must have a special property: it must enable self-referentiality. The system must be able to “observe” and process its own states. This self-referentiality is not simply a matter of programming feedback loops. Rather, it requires an emergent architecture that relates different levels of processing to each other and thereby produces a kind of “inner observer”.
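For illustration, here is a hedged sketch of what “programming feedback loops” amounts to (hypothetical class and parameter names): a system that records and reacts to its own internal state. The monitoring layer is just another piece of explicitly written code, not an emergent “inner observer”.

```python
# A programmed "self-monitor": the system logs its own states and reacts
# to them according to rules fixed in advance by the programmer. This is
# an explicit feedback loop, not an emergent inner observer.

class SelfMonitoringSystem:
    def __init__(self):
        self.state = 0.0
        self.history = []  # the system's record of its own past states

    def step(self, external_input: float) -> None:
        self.state = 0.9 * self.state + external_input
        self.history.append(self.state)      # "observing" its own state
        if self.state > 10.0:                # reacting to its own state
            self.state *= 0.5                # predetermined self-regulation

system = SelfMonitoringSystem()
for signal in [3.0, 4.0, 5.0, 6.0]:
    system.step(signal)
print(system.history)
```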
From this self-referential architecture, self-awareness must then develop — the ability to perceive and experience oneself as an independent entity. This process is fundamentally autocatalytic: the emerging self-awareness strengthens and stabilizes the very processes that produced it. It is a self-reinforcing cycle that cannot be built in from the outside.
Finally, the system must be able to make autonomous decisions about its actions. These decisions must come from the system itself, based on its own interpretation of the situation and its own “values” or “goals”. This is fundamentally different from programmed decision-making in artificial systems, which rests on predetermined algorithms and evaluation criteria.
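What such programmed decision-making typically looks like can be sketched in a few lines (the weights and option names below are invented for illustration): the evaluation criteria are fixed by the developers before the system ever runs, and the system only ever maximizes the score they defined.

```python
# Programmed decision-making: the evaluation criteria are fixed in advance.
# The system maximizes a score its developers defined; it does not form
# its own values or goals.

WEIGHTS = {"speed": 0.5, "cost": -0.3, "risk": -0.2}  # predetermined criteria

def utility(features):
    """Weighted sum of an option's features under the fixed criteria."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def decide(options):
    """Return the option with the highest predetermined utility."""
    return max(options, key=lambda name: utility(options[name]))

plans = {
    "plan_a": {"speed": 8.0, "cost": 2.0, "risk": 1.0},
    "plan_b": {"speed": 5.0, "cost": 1.0, "risk": 0.5},
}
print(decide(plans))  # -> plan_a
```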
Digital systems are programmed deterministically by their very nature. They carry out predetermined instructions, but cannot develop emergent properties in the true sense of the word. Even if one tried to program self-organizing processes, the result would only be a simulation of self-organization, not real emergence.
This contradiction between control through programming and true self-organization is fundamental. The development of consciousness requires a process that eludes direct control and programming — which is in direct contrast to the principles of digital information processing.
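Cellular automata are a standard illustration of this tension. In Conway’s Game of Life, for instance, patterns appear to organize and propagate themselves, yet every step follows deterministically from a handful of fixed rules. A compact sketch:

```python
# Conway's Game of Life: seemingly self-organizing patterns, yet every
# cell's next state is fully determined by fixed, programmer-given rules.

from collections import Counter

def step(live_cells):
    """Compute the next generation from the set of live (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "glider" appears to move across the grid of its own accord,
# but the motion is nothing more than rule-following, step by step.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))
```

Whether such rule-governed emergence could ever be more than a simulation of self-organization is precisely the question at issue here.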
But even setting aside the technical hurdles, the question remains: why should an artificial system have consciousness at all? Because of its necessary autonomy and self-referentiality, a truly conscious system would probably be less effective at specific tasks than a purposefully optimized non-conscious one.
The resources that would flow into the development of artificial consciousness would be far better invested in improving concrete functionalities. Artificial systems can also be highly effective without consciousness — perhaps even more effective because they are not “distracted” by subjective experience.
A look at the evolution of human consciousness is also revealing. The combination of consciousness and technical intelligence, as we find it in humans, was by no means an inevitable development. Rather, it is the result of a specific, contingent evolutionary path.
Even if an artificial evolution were allowed to run for millions of years, there would be no guarantee that conscious life would emerge, let alone a combination of consciousness and technical intelligence. The assumption that this process can be deliberately brought about or even accelerated misjudges the fundamentally contingent nature of evolutionary development.
Finally, one must not ignore the ethical consequences. If it were actually possible to create an artificial conscious system, one would have created a sentient being. This would have to be regarded as a moral subject that could not simply be switched on and off or instrumentalized at will.
The ethical problems that would arise from this would be enormous: What rights would such a system have? What duties would we have towards it? How could we ensure that it does not suffer? These questions show that the development of artificial consciousness would be not only technically questionable but also ethically highly problematic.
The development of artificial consciousness is neither technically realistic nor practically sensible. Instead, we should focus on further developing the specific strengths of artificial systems: their ability to process information efficiently, recognize patterns, and solve problems. These capabilities are valuable enough; they do not need consciousness to be useful.
The idea that “real” artificial intelligence must have consciousness stems from an anthropomorphic bias — the assumption that intelligent systems must be like humans in order to be “really” intelligent. This assumption is not only wrong, it also hinders progress in AI development by directing resources in an ultimately fruitless direction.
Artificial intelligence should be developed and valued for what it is: a powerful tool for solving specific problems, not an attempt to replicate human consciousness.