AI and (lack of) Consciousness: what problems does this raise?

Machines could one day become conscious, with potentially catastrophic consequences. For the moment, they are not, and that too is a problem: they lack morality and doubt.

For the moment, machines are not conscious. The debate is raging among experts: could the most advanced AIs one day reach the singularity, that is, a form of consciousness in machines? We should first agree on the concept of consciousness, and in particular on the “hard problem of consciousness” as presented by David Chalmers, a philosopher of mind.

What is consciousness? Let’s refer to the CNRTL: “Organization of the psyche of an individual which, by allowing him to know his states, his actions and their moral value, allows him to feel that he exists, to be present to himself.”

The hard problem of consciousness, on the other hand, refers to the problem of the origin of qualia, i.e. the subjective content of the experience of a mental state, and divides philosophers on the question of whether the mind is separate from the body (mind-body dualism) or whether they are one (monism). I will not go into the details of these concepts, which, while fascinating, would take too long to explain. If you want to learn more about the subject, I recommend MIT’s excellent MOOC “Minds and Machines”, which explains the various philosophical problems and currents of thought on the subject very well.

Without this clear understanding of the phenomenon of consciousness, how can we detect that a machine reaches a form of conscious intelligence?

This question is legitimate because many experts believe that AI-based systems could probably reach human-level capabilities by 2040–2050, and almost certainly by 2075. (Source: “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”)

This raises many concerns, especially the famous “singularity”. David Chalmers explains it in his paper “The Singularity: A Philosophical Analysis”: “What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the ‘singularity’.”

This singularity is often associated with the end of the world, as feared by experts such as Elon Musk, Stephen Hawking, and Nick Bostrom.

Fortunately, other experts are more skeptical of this prediction and strongly doubt that it will ever come true.

At the moment, AI systems are not conscious, and, ironically, this is part of the reason why they cannot be fully trusted, a problem that is already relevant today.

As Daniel Dennett, another famous philosopher of the mind, says in his book “From Bacteria to Bach and Back: The Evolution of Minds”: “The real danger, I think, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence.”

AI is already ubiquitous in our daily lives, and we trust it too much, not without consequences.

Unfortunately, because machines are not conscious, they are also devoid of moral values. According to the philosopher Carissa Véliz, having moral abilities depends on being sentient. Algorithms devoid of sentience can therefore be neither autonomous nor held responsible for anything: “they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.”

According to the neuroscientist Stephen Fleming, what machines lack is doubt: machines “don’t know what they don’t know, an ability that psychologists call metacognition. Metacognition is the ability to think about one’s own thoughts — to recognize when one may be wrong, for example, or when it would be better to look for a second opinion.”

Artificial intelligence researchers have known for some time that machines tend to be far too confident in their results: faced with something entirely new, they will predict a completely wrong outcome with a high level of confidence rather than admit their limitations.
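A toy experiment makes this overconfidence concrete. Below is a minimal sketch (my own illustration, not from any cited work): a plain logistic-regression classifier is trained on two small clusters of points, then asked about an input far outside anything it has ever seen. It has no grounds for an answer, yet it reports near-total certainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: two well-separated 2-D clusters (classes 0 and 1).
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Plain logistic regression, fit by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def confidence(x):
    """Confidence the model assigns to its own predicted class."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return max(p, 1.0 - p)

# An input unlike anything seen in training: the model has no grounds
# for a prediction, yet its reported confidence is close to 100%.
print(confidence(np.array([50.0, 50.0])))
```

The model happily extrapolates its decision boundary to infinity; nothing in its training objective ever taught it to say “I don’t know.”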

Researchers propose giving robots an introspective ability: letting them estimate their probability of being correct before making a decision, rather than after, by assessing their level of confidence, including recognizing that a situation has never been seen before. Simulating metacognition and doubt in machines would improve the trust we can place in them.
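One crude way to approximate that “have I seen this before?” check, sketched here purely for illustration (the function names and the hand-tuned threshold are my own assumptions, not the researchers’ method), is to measure how far a new input lies from anything in the training data and defer when it is out of range:

```python
import numpy as np

rng = np.random.default_rng(1)
# Situations the system was trained on (toy 2-D feature vectors).
seen = rng.normal(0.0, 1.0, (200, 2))

def looks_familiar(x, threshold=1.5):
    """Crude metacognition: nearest-neighbour distance to the training
    set as a novelty score. `threshold` is a hand-tuned value for this
    toy data, not a general-purpose constant."""
    nearest = np.min(np.linalg.norm(seen - x, axis=1))
    return nearest < threshold

def decide(x):
    # Assess novelty *before* acting, and defer when out of depth.
    if not looks_familiar(x):
        return "defer: ask for a second opinion"
    return "act autonomously"

print(decide(np.array([0.2, -0.1])))  # a typical, familiar situation
print(decide(np.array([8.0, 8.0])))   # nothing like this in training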

This approach has also been adopted by MIT researchers, who developed a technique that allows a neural network to produce a prediction accompanied by a confidence level: “A network’s level of certainty can be the difference between an autonomous vehicle determining that ‘it’s all clear to proceed through the intersection’ and ‘it’s probably clear, so stop just in case.’”
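The intersection example can be sketched as a simple decision rule, assuming the network already outputs a calibrated probability that the way is clear (the function name and thresholds below are my own illustrative choices, not values from the MIT work):

```python
def plan_at_intersection(p_clear, safe_threshold=0.95):
    """Turn a prediction plus confidence into a cautious action.

    `p_clear` is the model's estimated probability that the intersection
    is clear; `safe_threshold` is an assumed safety margin.
    """
    if p_clear >= safe_threshold:
        return "proceed"
    elif p_clear >= 0.5:
        return "probably clear: slow down and check"
    return "stop"

print(plan_at_intersection(0.99))
print(plan_at_intersection(0.70))
print(plan_at_intersection(0.20))
```

The point is that the confidence level, not just the prediction itself, drives the behavior: the same “clear” prediction leads to different actions depending on how sure the network is.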

The fact that AIs are not conscious, and therefore endowed with neither moral values nor doubt, is a problem today. But machines acquiring a form of consciousness would be a serious problem as well, and one that we would not necessarily be able to detect. In both cases, work on AI ethics is essential, and we can only be alarmed that Google recently fired the head of its ethics department… Fortunately, the European Commission has begun to address these issues, and a proposal for a regulation was tabled on 21 April 2021, although much remains to be done.


Translated from
