
Can Androids Experience Qualia?

by Ronin Winter, February 6th, 2019


qualia (plural noun): a quality or property as perceived or experienced by a person.

What exactly is consciousness?

There seem to be a host of variables at play in determining conscious states — and whether a certain being is conscious or not.

The first variable is awareness of one’s surroundings. This is easily recognised in waking and dreaming states, in which consciousness is present, as opposed to states such as deep sleep or anaesthesia, in which it is not.

This dichotomy of awareness states represents the clear-cut cases in which consciousness can be easily determined, rendering it an easy problem of consciousness. The separation between conscious and unconscious states seems clean; however, a conundrum arises in determining the consciousness of people in vegetative states. This is a condition in which patients demonstrate only partial arousal rather than the true awareness found in waking or dreaming states. Such patients seem to lie somewhere on a spectrum between consciousness and unconsciousness.

The second variable is introspection. It ties in with the first, but instead of awareness of one’s surroundings it is awareness of one’s self and one’s own thoughts or subjective experience. This is far less clear-cut and harder to recognise: we sense that human beings possess this capability, but we are more dubious about whether other animals do.

Yet you could still question whether other human beings have this quality, as demonstrated by the thought experiment developed by David Chalmers: the Philosophical Zombie.

Philosophical Zombies and the Introspection Variable

A Philosophical Zombie is a being that is indistinguishable from a human yet lacks conscious experience. The question Chalmers poses is how we could know whether other human beings are not zombies, and what differentiates a zombie from a conscious human being.

The variable of introspection is famously tested in the Mirror Test, developed by the psychologist Gordon Gallup Jr. The test determines whether an animal is capable of self-recognition: the animal is anaesthetised and then marked with paint or a sticker on an area of its body that it cannot normally see.

When the animal recovers from the anaesthetic, it is given access to a mirror. If the animal then touches or investigates the mark, it is taken as an indication that the animal perceives the reflected image as itself, rather than that of another animal¹.

Great apes, elephants, dolphins, orcas, and the Eurasian magpie have been able to pass the test, while other notable species such as dogs and some species of monkeys have failed it.

The test also does not fully capture what counts as introspection and self-recognition, as some animal species that fail it can nonetheless differentiate their own scents and sounds from those of other animals. This set of variables provides a fairly intuitive notion of what constitutes consciousness; however, significant inconsistencies arise from this perspective.

Thus, we will now explore a more consistent yet unintuitive notion of consciousness set out by neuroscientist Giulio Tononi in his Integrated Information Theory.

Integrated Information Theory (IIT)

As of now, the relationship between consciousness and the brain has yet to be fully established and remains elusive. IIT attempts to establish a theoretical framework for consciousness, separating itself from the empirical studies that investigate the intuitive notion of consciousness highlighted above.

By disentangling itself from the brain and empirical investigation, IIT lays out a set of axioms that establish certain essential qualities of consciousness, giving it a framework for inferring the quality and quantity of consciousness in a given organism or system.

A radical implication of this viewpoint is a form of panpsychism: everything has some form of consciousness, even at a rudimentary level. According to IIT, consciousness is determined by the integration of information within a given system, and this is represented by the mathematical quantity ɸ (phi).

In IIT, the first variable that determines the consciousness of a system is its cause-effect structure: the system as a whole must contain a cause-effect mechanism enabling it to change its structure and state, and the parts that constitute it must each present a cause-effect mechanism of their own.

For example, the brain satisfies this variable: it can change its state as a whole through the pattern of its neural pathways, and its fundamental constituents, neurons, also satisfy it, since each neuron changes its state (firing or not firing) in response to the inputs that initiate its action potential.

This axiom allows experiences to be unique and to have particular qualia, which arise from the specific states generated by the cause-effect structure of a given system.

The second variable, or axiom, is integration: it determines to what extent the emergence of consciousness in a given system arises from the integration of its parts. For example, if we cut the brain in half it would no longer be conscious, because it is integrated in such a manner that severing its structure would render it unconscious.

The third variable is maximality²: a conscious system must have the maximum amount of integrated information, meaning that the system must integrate more information than its parts do. Thus, the brain as a whole needs a higher level of integration than its individual neurons for it to be more conscious than they are.
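To make the idea of integration concrete, here is a deliberately crude sketch in Python. It is not the real ɸ calculus of IIT (there are no cause-effect repertoires and no search over partitions); it only uses a toy proxy of my own choosing, namely the information a two-node system carries about its own next state minus what its two parts carry on their own, to show why an integrated system scores higher than a disintegrated one. The update rules and the mutual-information proxy are assumptions for illustration, not Tononi's definitions.

```python
from itertools import product
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of equally likely (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def phi_proxy(rule):
    """Whole-system past/future information minus the sum over the two parts.

    This is only a rough stand-in for integrated information, not real phi.
    """
    states = list(product([0, 1], repeat=2))                  # uniform prior over (a, b)
    transitions = [((a, b), rule(a, b)) for a, b in states]
    whole = mutual_information(transitions)
    part_a = mutual_information([(past[0], fut[0]) for past, fut in transitions])
    part_b = mutual_information([(past[1], fut[1]) for past, fut in transitions])
    return whole - (part_a + part_b)

# "Integrated" toy system: each node's next state is determined by the other node.
print(phi_proxy(lambda a, b: (b, a)))   # 2.0 bits: the whole exceeds its parts

# "Disintegrated" toy system: each node just copies itself; the parts explain everything.
print(phi_proxy(lambda a, b: (a, b)))   # 0.0 bits: no integration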

By laying these foundations of consciousness, we can begin to answer the question “Can Androids Experience Qualia?” According to Integrated Information Theory, Artificial Intelligence (AI) systems are already conscious, but with a low ɸ value, because of the lack of integration in the structure of computers, the physical substrate that embodies an AI. The fundamental constituents of computers, transistors, are not as integrated as the brain’s neurons. Neurons are fluid: they can grow multiple dendrites that connect to other neurons and so form new inputs, whereas transistors cannot. They are designed and engineered to follow the architecture laid out for them; transistors are connected to other modules in the computer through buses and, unlike neurons, cannot change that structure themselves.

Another example of the lack of integration and interconnectedness in computers is in how they store memory.

In a computer, memory is stored in binary within dedicated components such as primary and secondary storage, and this data is laid down sequentially at very specific locations. Brains, on the other hand, store a memory not in one specific location but across many locations and many neurons, forming neural pathways that reproduce specific instances of memory. As AI systems are embodied in computers, it is very improbable that they can be as conscious as humans in terms of their ɸ value.
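As a minimal sketch of what distributed, content-addressable storage looks like, here is a classic Hopfield network in Python. The choice of model is my own for illustration; the article does not name it. Each stored pattern is spread across the entire weight matrix rather than sitting at a single address, so a partially corrupted cue can still recall the complete memory.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: every memory is spread across all pairwise weights."""
    weights = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(weights, 0)
    return weights

def recall(weights, cue, steps=10):
    """Iteratively settle a corrupted cue into the nearest stored pattern."""
    state = cue.copy()
    for _ in range(steps):
        state = np.where(weights @ state >= 0, 1, -1)
    return state

memories = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])
weights = train(memories)

noisy = memories[0].copy()
noisy[:2] *= -1                     # flip two bits of the first memory
print(recall(weights, noisy))       # recovers the original pattern
```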

However, if the architecture of computers were dramatically altered to allow for greater integration and interconnectedness among its parts, there is no reason why an AI could not be as conscious as humans. Recent advances in computing, such as quantum computing and optical computing, largely stem from the problem of transistors shrinking as the integrated circuits of computers become ever smaller.

This reduction in size is approaching limits imposed by the laws of physics. To cope with this, scientists and engineers have devised potential solutions through other methods of computing that require different substrates. Thus, if the substrate in which an AI system resides demonstrates high integration, it is quite probable that the AI could display a high level of consciousness; however, the current physical substrate it resides in, the classical computer, does not exhibit high integration.

I would also like to add that consciousness ≠ intelligence. There seems to be a relationship between the brain’s consciousness and its intelligence, but what that relationship is remains a mystery; correlation does not imply causation, so even if consciousness is related to intelligence, it does not follow that one causes the other to arise.

Even on its current architecture and substrate, AI has already achieved feats previously thought highly improbable, such as defeating the world Go champion and driving autonomously. It also has the potential to advance to superhuman capabilities in some tasks, and even to reach Artificial General Intelligence (AGI), meaning human-level capability across a large array of tasks. None of this implies that it will be conscious; an AI could demonstrate superintelligence and yet the lights would still not be on: it would still be unable to experience qualia.

Consciousness is highly dependent on the physical substrate in which it resides; a superintelligent system would still not be conscious if its substrate were still the classical computer.

Let us go back to the more intuitive notion of consciousness and try to answer the question from there.

Consciousness Explored

The first point that I would like to make is that consciousness arose in humans as a result of Darwinian evolution. This formation occurred over millions of years, spanning generation upon generation.

Somewhere along this evolutionary journey, consciousness arose.

Thus, the argument is that AI has not gone through this formation, so it would be highly improbable for it to develop consciousness. Still, there is no inherent reason why consciousness should be limited to carbon-based biological organisms; according to the laws of physics, it should also be extendable to mechanical and electrical systems. A potential route to consciousness in AI is therefore to imitate the evolutionary journey of humans: by mimicking natural selection and Darwinian evolution, AI systems that demonstrate the potential for conscious thought could be selected and mutated so as to pass their “genes” on to the next generation. In fact, Darwinian evolution has already been implemented in computer science through genetic algorithms³, which generate solutions to search and optimisation problems via a procedure akin to natural selection.
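For illustration, here is a minimal genetic-algorithm sketch in Python solving the toy “OneMax” problem: candidate solutions are bitstrings, fitness is the number of 1s, and tournament selection, crossover, and mutation play the role of natural selection. The problem, parameters, and operators are illustrative assumptions on my part, not something specified in the article.

```python
import random

# Illustrative parameters, not tuned values
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 100, 0.02

def fitness(genome):
    return sum(genome)                                # "fitter" genomes contain more 1s

def tournament(pop):
    return max(random.sample(pop, 3), key=fitness)    # best of three random individuals

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)             # one-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

print(max(map(fitness, population)))                  # approaches GENOME_LEN over the generations
```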

Consciousness could be achieved in this manner, since the laws of physics permit it; however, a much deeper understanding of the brain must be established before AI can imitate a human’s cognitive architecture, and for now the brain largely remains a mystery.

If AI progresses to the point where it can be embodied in humanoids that are indistinguishable from a normal human being, then the question of whether the AI is conscious or not would cease to matter.

This notion is enacted in the television series Westworld, where indistinguishable humanoid robots interact with normal humans in a theme park.

If the humanoid demonstrated human actions and emotions down to the finest detail, hence indistinguishable, we would ascribe consciousness to it anyway; it would be so lifelike and real that we would act as if the lights were on and the being could experience qualia.

As consciousness at large remains a mystery and hard to grasp, cases like this would sidestep the question of consciousness altogether: the AI would have imitated humans so well that it passes the Turing Test.

What, then, is the overarching conclusion to all this, and what is the answer to the question?

Well, there are many possible perspectives and viewpoints that could be used to answer it; I have put forth three.

The first is based on Integrated Information Theory, which holds that AI is already conscious, but only to a small degree; for it to display human-level consciousness, its substrate would need to be altered to allow for greater integration.

The second perspective is that consciousness is not limited to carbon-based life forms, yet it has arisen from Darwinian evolution, and for AI to become conscious it would need to undertake an evolutionary journey similar to that of humans and other biological organisms. The third viewpoint is that it would not matter whether an AI were conscious or not: if it were virtually indistinguishable from humans, we would ascribe human-level consciousness to it nonetheless.


“Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.” ― Erwin Schrödinger

[1]: Wikipedia (n.d.). Mirror Test. https://en.wikipedia.org/wiki/Mirror_test

[2]: Hedda Hassel Mørch (2017). The Integrated Information Theory of Consciousness. https://philosophynow.org/issues/121/The_Integrated_Information_Theory_of_Consciousness

[3]: Wikipedia (n.d.). Genetic algorithm. https://en.wikipedia.org/wiki/Genetic_algorithm