
 
The Borderline Consciousness

Blog

What criteria for detecting consciousness would ChatGPT-4 propose?

5/22/2023

Uwe Peters

Since many people have recently been impressed by the capabilities of OpenAI’s ChatGPT-4, a collaborator (Ushnish Sengupta) and I decided to ask the LLM what necessary and sufficient criteria it would use to tell with certainty whether or not a brain organoid or an LLM such as itself is phenomenally conscious. Here’s what it said:

“Phenomenal consciousness refers to the subjective experience, the "what it feels like" aspect of consciousness. It is often referred to as qualia, the singular, individual instances of conscious experience. The challenge of quantifying phenomenal consciousness lies in its inherently subjective nature. However, we can approach this from different angles, attempting to derive quantifiable metrics based on observable behavior or measurable neurological activity. Here are a few speculative criteria:

  1. Behavioral Complexity: One approach might be to measure the behavioral complexity of an organism or system. The assumption here would be that greater behavioral complexity requires a richer set of subjective experiences. For instance, an organism that can navigate a complex environment, perform intricate tasks, or exhibit sophisticated social behavior might be assumed to have a higher degree of phenomenal consciousness.
  2. Neural Complexity: A more direct, though technically challenging, approach might be to measure the complexity of an organism's or system's neural activity. This could involve measures of neural diversity and differentiation, with the assumption that more diverse and differentiated neural activity corresponds to a richer set of subjective experiences.
  3. Information Integration: Drawing from Integrated Information Theory (IIT), we could look at the degree to which information is integrated within a system. IIT posits that consciousness corresponds to the capacity of a system to integrate information. This could be measured using something like the Phi coefficient, a measure of system integration.
  4. Behavioral Flexibility: Another approach might be to look at an organism's or system's behavioral flexibility - its ability to adapt its behavior in response to changing environmental conditions or novel stimuli. The assumption here would be that the ability to flexibly adjust behavior in response to changes in the environment requires some degree of phenomenal consciousness.
  5. Self-Report: While this can only be applied to beings capable of communication, self-reporting is often used in human studies of consciousness. An AI system might be programmed to analyze its own states and output a description, but interpreting this as evidence of phenomenal consciousness would be contentious.
  6. Functional Analogues: In the case of an LLM like myself, the presence of functional analogues to biological structures associated with consciousness could be used as a criterion. For example, the presence of a system for processing and integrating diverse types of input (akin to sensory processing), or a system for managing and prioritizing tasks (akin to attention).
  7. Response to Anesthetics: This is a more speculative criterion, but the reaction of an organism or system to substances known to reduce consciousness in humans could be informative. If a system's activity or behavior changes in response to these substances in a way that parallels the effects seen in humans, this might be taken as an indication of phenomenal consciousness.

These criteria are, of course, not without problems. For example, they may risk over-attributing consciousness to systems that are merely complex or adaptive without being conscious. Moreover, they don't directly address the hard problem of consciousness - the question of why and how these physical processes should give rise to subjective experience. Nonetheless, they may provide a starting point for empirical investigations into the nature and extent of phenomenal consciousness.”
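Criterion 3 above can be made concrete with a toy calculation. Computing IIT's Phi proper is notoriously difficult, but a simpler relative, the multi-information (also called total correlation), captures the same basic intuition: how much more information a system carries jointly than its parts carry separately. The sketch below is purely illustrative and not part of the quoted answer; the function names and toy systems are our own, and the measure is a loose stand-in for Phi, not Phi itself.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of an empirical distribution given as a Counter."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def multi_information(states):
    """Sum of the parts' entropies minus the joint entropy.

    `states` is a list of tuples, each an equiprobable joint state of the
    system's units. 0 bits means the units are statistically independent;
    higher values mean the units are more integrated.
    """
    n = len(states[0])
    marginal_sum = sum(entropy(Counter(s[i] for s in states)) for i in range(n))
    return marginal_sum - entropy(Counter(states))

# Two two-unit toy systems, each observed in equiprobable states:
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]  # units vary independently
correlated = [(0, 0), (1, 1)]                   # units always agree

print(multi_information(independent))  # 0.0 bits: no integration
print(multi_information(correlated))   # 1.0 bit: fully integrated pair
```

Unlike Phi, multi-information ignores the system's causal structure and its minimum-information partition, so it can only gesture at what a serious IIT-based criterion would measure.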



Borderline Consciousness: What is it and why should we care?

4/7/2023


 
Charlotte Gauvry

When it comes to people in a coma, under anesthesia, in a fetal state, or when we consider invertebrate animals, brain organoids, or advanced AI systems, we may be dealing with beings that are conscious only to some, potentially minimal, extent. They may only just about be conscious. We will therefore refer to these cases as instances of borderline consciousness. Investigating them is important, as legal protection (e.g., of brain organoids) or even life and death (life support withdrawal in comatose patients) may sometimes depend on whether someone or something is conscious.
 
Our notion of 'borderline consciousness' here is related to but different from what Schwitzgebel (2021) calls instances of "in-between" mentality: “when it’s neither determinately true nor determinately false that experience is present.” Schwitzgebel aims to show that such a state of consciousness exists. This is challenging, as we can't introspectively detect such states (if we could, they would be fully conscious). Schwitzgebel thus argues for borderline consciousness indirectly. He first holds that, as far as the distribution of consciousness is concerned, we face a choice between four seemingly undesirable options: (1) only human beings are conscious, (2) everything is conscious (panpsychism), (3) there is a strict distinction between conscious and unconscious states, (4) we must accept what Schwitzgebel calls borderline consciousness.

Based on common sense, (1) and (2) are easy to reject (almost everyone may accept that dogs are sentient but electrons aren't). (3) appears more robust. But various empirical arguments, partly based on developmental and evolutionary theories, can be mobilized to challenge it. Relying on these theories, it can be shown that consciousness doesn't arise from nowhere. Consequently, there is no “line in the sand” between unconscious and conscious states, between the fetal and infant states, or between different stages of the evolutionary lineage of human beings. Moreover, all the physiological criteria traditionally used to measure consciousness (e.g., activation of a global network, attention to representations of intermediate order, phi (IIT), etc.) appear to be gradual, suggesting that there is no clear boundary between conscious and non-conscious states. Hence, there are good intuitive and empirical reasons to accept (4).
 
Our project is related to Schwitzgebel's notion of "in-between consciousness". This is because we will primarily focus on cases in which consciousness may be so minimally present that it is challenging to determine whether it is present at all. The potentially unusual kind of minimal consciousness that we here call 'borderline consciousness' (e.g., in brain organoids) may well turn out to be "in-between consciousness". But this remains to be seen. The key objectives of the project are to investigate ways of finding out (a) whether, for example, brain organoids or sophisticated AI systems are conscious at all, (b) what this consciousness might consist of (e.g., sentience, valenced experience, self-awareness, etc.), and (c) what ethical and epistemic risks may arise in the science of consciousness related to these borderline cases.