A new interdisciplinary report, whose co-authors include Turing Award winner Yoshua Bengio, examines whether current or near-future artificial intelligence (AI) systems could be conscious, drawing on scientific theories about the basis of consciousness in the human brain. The paper doesn’t claim that existing AI is conscious, but it does suggest there are no “obvious technical barriers” to building such systems.
The report proposes that progress can be made on the perplexing issue of machine consciousness by looking to neuroscientific theories that enjoy empirical support and describe the functions associated with consciousness in humans. The authors argue that if a philosophical position called “computational functionalism” is correct – the idea that performing certain computations is sufficient for consciousness, regardless of substrate – then AI systems performing similar computations could also be conscious.
The report surveys prominent neuroscientific theories of consciousness, including the “global workspace theory”, the “recurrent processing theory”, and “higher-order” theories. From these theories, the authors derive a list of “indicator properties” of consciousness, such as algorithmic recurrence, metacognitive monitoring to distinguish signal from noise, and the modelling of sensorimotor contingencies. They suggest that current machine learning techniques could be used to build systems exhibiting these properties.
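To make the first of these indicators concrete: “algorithmic recurrence” refers to processing in which a system repeatedly updates its own internal state over time, rather than producing an output in a single feedforward sweep. The Python sketch below is purely illustrative – the function names and dimensions are ours, not the report’s – and simply contrasts the two kinds of processing.

```python
import numpy as np

def feedforward_pass(x, weights):
    """Single sweep: the input is transformed once through a stack of
    layers, with no reuse of internal state. No algorithmic recurrence."""
    h = x
    for w in weights:
        h = np.tanh(w @ h)
    return h

def recurrent_pass(x, w_in, w_rec, n_steps=10):
    """Algorithmic recurrence: the same update is applied repeatedly and
    the hidden state h feeds back into each step - the kind of iterative
    refinement that theories such as recurrent processing theory single
    out as significant."""
    h = np.zeros(w_rec.shape[0])
    for _ in range(n_steps):
        h = np.tanh(w_in @ x + w_rec @ h)  # state feeds back on itself
    return h

# Toy usage with arbitrary dimensions:
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w_in = rng.standard_normal((8, 4))
w_rec = 0.1 * rng.standard_normal((8, 8))
state = recurrent_pass(x, w_in, w_rec)
```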
The report also examines whether any existing AI systems possess these indicator properties, considering large language models, a reinforcement learning agent controlling a simulated rodent, and other cases. The analysis illustrates how the indicator properties can be used to evaluate systems, but does not find strong evidence that current systems meet the conditions for consciousness.
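The report’s assessments are qualitative, but the general shape of an indicator-based evaluation can be sketched as a checklist: score a candidate system against each property, then aggregate. In the hypothetical Python sketch below, the property names are paraphrased from the report’s list, and the example judgements are invented for illustration – they are not the report’s actual findings.

```python
# Paraphrased indicator properties, abbreviated for illustration.
INDICATOR_PROPERTIES = [
    "algorithmic_recurrence",
    "global_workspace",
    "metacognitive_monitoring",
    "agency_and_embodiment",
]

def evaluate(system_name: str, judgements: dict[str, bool]) -> None:
    """Report how many indicator properties a system plausibly exhibits.
    A high count would not prove consciousness: the report treats the
    indicators as raising or lowering credence, not as a decisive test."""
    met = [p for p in INDICATOR_PROPERTIES if judgements.get(p, False)]
    print(f"{system_name}: {len(met)}/{len(INDICATOR_PROPERTIES)} indicators met")
    for p in met:
        print(f"  - {p}")

# Invented judgements, purely to show the mechanics of the rubric:
evaluate("large_language_model", {
    "algorithmic_recurrence": False,
    "global_workspace": False,
    "metacognitive_monitoring": False,
    "agency_and_embodiment": False,
})
```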
A key conclusion is that conscious AI may be possible in the near term, perhaps within decades, if computational functionalism is true. Given the profound implications, the report stresses the need for further interdisciplinary research and emphasises that the possibility of consciousness must be taken seriously as increasingly capable AI systems are developed.
The authors also point to two significant risks: under-attributing and over-attributing consciousness to AI. The former could lead us to create conscious AIs without recognising them as such; the latter, to mistakenly ascribe consciousness to systems that have none.
The report represents a significant advance in rigorous, empirically grounded analysis of machine consciousness. While not ruling out the possibility of conscious AI, the authors’ analysis suggests we are not there yet. But we may be closer than many imagine.