AI should not develop consciousness

Would it be desirable for artificial intelligences to develop consciousness? For various reasons, probably not, says Dr. Wanja Wiese from the Institute of Philosophy II at the Ruhr University Bochum.

Artificial intelligences (AI) should not develop consciousness. This is the conclusion reached by Wanja Wiese from the Ruhr University Bochum (RUB). “The causal structure could be a difference relevant to consciousness,” argues the expert in an essay published in “Philosophical Studies”.

A question of risk

“On the one hand, the risk of accidentally creating artificial consciousness should be reduced; this would be desirable because it is currently not clear under which conditions the creation of artificial consciousness is morally permissible. On the other hand, deceptions by apparently conscious AI systems that only act as if they were conscious must be ruled out,” says Wiese.

This is particularly important because there is already evidence that many people who often interact with chatbots attribute consciousness to these systems. At the same time, there is a consensus among experts that current AI systems do not have consciousness.

Survival of an organism

Assuming that consciousness contributes to the survival of a conscious organism, then, from the perspective of the free energy principle, conscious experience must leave a trace in the physiological processes that maintain the organism, a trace that can be described as a form of information processing.

Wiese: “This can be called the ‘computational correlate of consciousness’, and it could also be realized in a computer. However, it may be that further conditions would have to be fulfilled so that a computer does not merely simulate conscious experience but replicates it.”

Source: www.com-magazin.de