Since many people have recently been impressed by the capabilities of OpenAI’s ChatGPT (GPT-4), a collaborator (Ushnish Sengupta) and I decided to ask the LLM what necessary and sufficient criteria it would use to tell with certainty whether or not a brain organoid or an LLM such as itself is phenomenally conscious. Here’s what it said:
“Phenomenal consciousness refers to the subjective experience, the "what it feels like" aspect of consciousness. It is often referred to as qualia, the singular, individual instances of conscious experience. The challenge of quantifying phenomenal consciousness lies in its inherently subjective nature. However, we can approach this from different angles, attempting to derive quantifiable metrics based on observable behavior or measurable neurological activity. Here are a few speculative criteria:
These criteria are, of course, not without problems. For example, they may risk over-attributing consciousness to systems that are merely complex or adaptive without being conscious. Moreover, they don't directly address the hard problem of consciousness: the question of why and how these physical processes should give rise to subjective experience. Nonetheless, they may provide a starting point for empirical investigations into the nature and extent of phenomenal consciousness.”
When it comes to people in a coma, under anesthesia, in a fetal state, or when we consider invertebrate animals, brain organoids, or advanced AI systems, we may be dealing with beings that are conscious only to some, potentially minimal, extent. They may be only just barely conscious. We will therefore refer to these cases as instances of borderline consciousness. Investigating them is important, as legal protection (e.g., of brain organoids) or even life and death (life support withdrawal in comatose patients) may sometimes depend on whether someone or something is conscious.
Our notion of 'borderline consciousness' here is related to but different from what Schwitzgebel (2021) calls instances of "in-between" mentality: “when it’s neither determinately true nor determinately false that experience is present.” Schwitzgebel aims to show that such a state of consciousness exists. This is challenging, as we can't introspectively detect such states (if we could, they would be fully conscious). Schwitzgebel thus argues for borderline consciousness indirectly. He first holds that, as far as the distribution of consciousness is concerned, we face a choice among four undesirable options: (1) only human beings are conscious, (2) everything is conscious (panpsychism), (3) there is a strict distinction between conscious and unconscious states, or (4) we must accept what Schwitzgebel calls borderline consciousness.
Based on common sense, (1) and (2) are easy to reject (almost everyone accepts that dogs are sentient but electrons aren't). (3) appears more robust. But various empirical arguments, partly based on developmental and evolutionary theories, can be mobilized to challenge it. Relying on these theories, it can be shown that consciousness doesn't arise from nowhere. Consequently, there is no “line in the sand” between unconscious and conscious states, between the fetal and infant states, or between different stages of the evolutionary lineage of human beings. Moreover, all the physiological criteria traditionally used to measure consciousness (e.g., activation of a global network, attention to representations of intermediate order, phi (IIT), etc.) appear to be gradual, suggesting that there is no clear boundary between conscious and non-conscious states. Hence, there are good intuitive and empirical reasons to accept (4).
Our project is related to Schwitzgebel's notion of "in-between consciousness" because we will primarily focus on cases in which consciousness may be so minimally present that it is challenging to determine whether it is present at all. The potentially unusual kind of minimal consciousness that we here call 'borderline consciousness' (e.g., in brain organoids) may well turn out to be "in-between consciousness". But this remains to be seen. The key objectives of the project are to investigate ways of finding out (a) whether, for example, brain organoids or sophisticated AI systems are conscious at all, (b) what this consciousness might consist of (e.g., sentience, valenced experience, self-awareness, etc.), and (c) what ethical and epistemic risks may arise in the science of consciousness related to these borderline cases.