Why current AIs are not conscious (according to some)

In their 2017 paper “What is consciousness, and could machines have it?”, Dehaene, Lau, and Kouider argue that the computations performed by current AI systems are not conscious, and outline what sorts of computations would need to be performed for them to be so. They make a distinction between three kinds of computations:

C0: unconscious computations

C1: computations whose results are globally available, e.g. for report and decision-making

C2: computations that involve self-monitoring

They first point out that many sophisticated mental activities are performed unconsciously, reviewing the large literature on priming and other findings that show just how sophisticated the unconscious mind is: decoding meanings, recognizing faces, parsing sentences. They hold that modern AI is basically all doing C0 computations.

These many specialized unconscious systems need to be integrated into unified behavior. The global neuronal workspace theory of consciousness, championed by one of the authors (Dehaene), posits that in addition to these modules there is a "global neuronal workspace": consciousness of a state is its availability in this workspace. Being able to report a state is a very sure sign of its global availability, since for it to have reached the level of being available to language it must also be available for sharing across many modules. (But reportability is not necessary for global availability.)
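The workspace picture can be caricatured in a few lines of code: independent modules compete for access, the strongest signal "ignites" the workspace, and the winning content is broadcast back to every module, making it globally available. This is a toy sketch of the idea, not anything from the paper; the module names and the salience rule are made up.

```python
# Toy sketch of a global-workspace cycle (illustrative only; the module
# names and the fixed salience values are my own, not the paper's).

class Module:
    def __init__(self, name, salience):
        self.name = name
        self.salience = salience   # how strongly this module's signal competes
        self.received = []         # broadcasts this module has seen

    def propose(self):
        """Offer a candidate entry for the workspace."""
        return self.salience, f"{self.name}-content"

    def receive(self, content):
        self.received.append(content)

def workspace_cycle(modules):
    # Modules compete for access; the strongest proposal wins ("ignition").
    salience, content = max(m.propose() for m in modules)
    for m in modules:
        m.receive(content)         # global broadcast: C1-style availability
    return content

mods = [Module("vision", 0.9), Module("audition", 0.4), Module("language", 0.2)]
winner = workspace_cycle(mods)
# Every module, including a verbal-report module, now shares the same content,
# which is why reportability is such a good sign of global availability.
```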

Roughly, by self-monitoring they mean what psychologists mean by 'metacognition', which itself covers a broad variety of phenomena: any sort of thinking (or other mental activity) that is itself about thought (or other mental activity). The three phenomena they discuss under this heading are confidence tracking, meta-memory, and reality monitoring. Confidence tracking refers to a variety of mechanisms that the brain has for computing its confidence in the results of various computations.
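Confidence tracking is easy to caricature with Bayes' rule: given likelihoods for competing hypotheses, the posterior probability of the chosen hypothesis doubles as a readout of how reliable the decision is. A minimal sketch, with made-up numbers (nothing here is from the paper):

```python
# Toy Bayesian confidence readout (illustrative numbers, not the paper's).
# The system both decides between hypotheses and reports how sure it is.

def posterior(likelihoods, priors):
    """Posterior probabilities by Bayes' rule."""
    unnorm = [l * p for l, p in zip(likelihoods, priors)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# P(data | H0) = 0.8, P(data | H1) = 0.2; flat prior over the two hypotheses.
post = posterior([0.8, 0.2], [0.5, 0.5])
decision = max(range(len(post)), key=lambda i: post[i])
confidence = post[decision]   # a C2-style self-estimate of reliability
print(decision, confidence)   # hypothesis 0 wins, with high confidence
```

The point of the sketch is that the confidence value is computed by the system about its own decision, which is what makes it a (very thin) form of self-monitoring.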

Meta-memory is keeping track of what you know and don't know. When something is on the tip of your tongue, you can't (at the moment) say what it is, but you know that you know it. We seem to have knowledge about what we know and memory about what we remember. The authors call this meta-memory. This is obviously very useful, since you can strategize about what further information to seek out, when to study, and so forth.

Reality monitoring is the tracking of where a representation comes from. Is that voice coming from the external world, or is it your internal monologue? Schizophrenia may be explained, in part, as a failure of the systems that track this.

The authors claim that current deep learning systems lack these self-monitoring capacities, with certain Bayesian techniques as a partial exception.

C1 and C2 are conceptually distinct, and empirically the two can come apart. An example of self-monitoring without global availability is that “subjects slow down after a typing mistake, even when they fail to consciously notice the error” (6). Conversely, information that reaches the global workspace often sheds its probabilistic detail along the way: global availability without the fine-grained confidence tracking of C2.

How could machines be endowed with C1 and C2 computations? 

They point to their favored examples of AIs with central global workspaces that allow them to flexibly do several tasks. Classic blackboard systems did something like this. And they discuss a more recent system called PathNet, which “uses a genetic algorithm to learn which path through its many specialized neural networks is most suited to a given task”. They tout PathNet's flexibility, and indeed it is in the business of managing multiple specialized systems, much as GWT is; that said, I don't see (from their bare description at least) how PathNet uses anything like a global workspace to manage its behavior.
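From their one-line description, the PathNet-flavored idea can be caricatured as a genetic loop over candidate "paths" of specialized modules: keep whichever path scores best on the task, and mutate copies of the winner. Everything below (the module names, the fitness function, the simple winner-keeps-slot selection scheme) is my own illustrative simplification, not PathNet's actual algorithm.

```python
import random
random.seed(0)

# Toy PathNet-style routing sketch (my own simplification, not the real thing):
# a "path" is a small set of specialized modules; a genetic loop evolves a
# path well suited to the task at hand.

MODULES = ["edge", "color", "shape", "word", "syntax"]

def fitness(path, task_relevant):
    # Reward paths that include task-relevant modules; penalize path length.
    return len(set(path) & task_relevant) - 0.1 * len(path)

def mutate(path):
    # Replace one randomly chosen module with a random alternative.
    new = list(path)
    new[random.randrange(len(new))] = random.choice(MODULES)
    return new

def evolve(task_relevant, generations=300):
    a = random.sample(MODULES, 3)
    b = random.sample(MODULES, 3)
    for _ in range(generations):
        if fitness(a, task_relevant) < fitness(b, task_relevant):
            a, b = b, a            # keep the winner in slot a
        b = mutate(a)              # challenger is a mutated copy of the winner
    return a

best = evolve({"word", "syntax"})  # a language-like task
```

Note that nothing in this loop involves broadcasting a winning state back to the modules themselves, which is the worry raised above: selecting among specialists is not obviously the same thing as a shared workspace.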

In general, they think that even without resolving all the thorny problems of consciousness, we can say with confidence that C1 and C2 are deeply linked to consciousness, that AIs currently do not have them, and that possible future AIs with C1 and C2 will act conscious and in all likelihood be conscious.
