Article

Consciousness beyond the human case


Abstract

There is growing interest in the relationship between AI and consciousness. Joseph LeDoux and Jonathan Birch thought it would be a good moment to put some of the big questions in this area to some leading experts. The challenge of addressing the questions they raised was taken up by Kristin Andrews, Nicky Clayton, Nathaniel Daw, Chris Frith, Hakwan Lau, Megan Peters, Susan Schneider, Anil Seth, Thomas Suddendorf, and Marie Vandekerckhove.


... If we want to define and measure consciousness in machines, we need to distinguish between different dimensions or levels of consciousness and develop evaluation methods that are level-specific. This calls for an exhaustive investigation of the various kinds of consciousness, including its neurobiological, experiential, and functional components (LeDoux et al., 2023). All three of these aspects of awareness (attention, context, and cognitive state) are profoundly impacted by context. Machines can appear non-conscious in some contexts while behaving in ways that imply consciousness in others; this is all dependent on the nature of the task, the environment, and the internal dynamics of processing. ...
... Machines can appear non-conscious in some contexts while behaving in ways that imply consciousness in others; this is all dependent on the nature of the task, the environment, and the internal dynamics of processing. When evaluating machines' levels of consciousness and correctly interpreting behavioral or physiological signals, it is crucial to account for these environmental factors (LeDoux et al., 2023). The attempt to define and measure machine consciousness raises ethical and moral concerns regarding the proper treatment of artificial beings. Making robots conscious has the potential to greatly alter their treatment, rights, and responsibilities. ...
Article
Full-text available
The study of machine consciousness has a wide range of potential and problems as it sits at the intersection of ethics, technology, and philosophy. This work explores the deep issues related to the effort to comprehend and perhaps induce awareness in machines. Technically, developments in artificial intelligence, neuroscience, and cognitive science are required to bring about machine awareness. True awareness remains a difficult objective to achieve, despite significant progress being made in creating AI systems that are capable of learning and solving problems. The implications of machine awareness are profound in terms of ethics. Determining a machine's moral standing and rights would be crucial if it were to become sentient. It is necessary to give careful attention to the ethical issues raised by the development of sentient beings, the abuse of sentient machines, and the moral ramifications of turning off sentient technologies. Philosophically, the presence of machine consciousness may cast doubt on our conceptions of identity, consciousness, and the essence of life. It could cause us to reevaluate how we view mankind and our role in the cosmos. It is imperative that machine awareness grow responsibly in light of these challenges. The purpose of this study is to shed light on the present status of research, draw attention to possible hazards and ethical issues, and offer recommendations for safely navigating this emerging subject. We want to steer the evolution of machine consciousness in a way that is both morally just and technologically inventive by promoting an educated and transparent discourse.
... These human-like capabilities raise profound questions about the nature of artificial intelligence (AI) and, in particular, whether AI is capable of having subjective experiences or 'phenomenal consciousness' (Nagel 1974, Chalmers 1996). This debate on consciousness in AI has been at the forefront of mainstream media and academic discourse across cognitive science (Shardlow and Przybyła 2022, Chalmers 2023, LeDoux et al. 2023, Wiese 2023). ...
... In summary, our investigation of folk psychological attributions of consciousness revealed that most people are willing to attribute some form of phenomenality to LLMs: only a third of our sample thought that ChatGPT definitely did not have subjective experience, while two-thirds of our sample thought that ChatGPT had varying degrees of phenomenal consciousness. The relatively high rates of consciousness attributions in this sample are somewhat surprising, given that experts in neuroscience and consciousness science currently estimate that LLMs are highly unlikely to be conscious (Butlin et al. 2023, LeDoux et al. 2023). These findings thus highlight a discrepancy between folk intuitions and expert opinions on artificial consciousness, with significant implications for the ethical, legal, and moral status of AI. ...
Article
Full-text available
Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial intelligence (AI). For example, the advent of large language models and their human-like linguistic abilities has raised substantial debate regarding whether or not AI could be conscious. Here, we consider the question of whether AI could have subjective experiences such as feelings and sensations (‘phenomenal consciousness’). While experts from many fields have weighed in on this issue in academic and public discourse, it remains unknown whether and how the general population attributes phenomenal consciousness to AI. We surveyed a sample of US residents (n = 300) and found that a majority of participants were willing to attribute some possibility of phenomenal consciousness to large language models. These attributions were robust, as they predicted attributions of mental states typically associated with phenomenality—but also flexible, as they were sensitive to individual differences such as usage frequency. Overall, these results show how folk intuitions about AI consciousness can diverge from expert intuitions—with potential implications for the legal and ethical status of AI.
... The prospect of artificial forms of consciousness is increasingly gaining traction as a concrete possibility both in the minds of lay people and of researchers in the field of neuroscience, robotics, AI, and their intersection (Butlin et al., 2023; LeDoux et al., 2023). While in the past the idea that an artificial system, like humanoid robots or non-anthropomorphic artefacts capable of emulating human cognitive and interactive features, can instantiate a form of consciousness was mainly promoted by sci-fi literature and movies, today it has much more scientific credit, notwithstanding significant criticisms still raised against it (Dietrich, Fields, Sullins, Van Heuveln, & Zebrowski, 2021). ...
... noetic or autonoetic) in the absence of the fundamental form of anoetic consciousness? And, if so, how can this be understood and explained (LeDoux et al., 2023)? The challenge here is both theoretical and practical. ...
... The issue is also linked with animal consciousness. The default position nowadays is slowly moving towards the consideration that all animals are conscious, in their respective particular ways (Birch et al., 2020; LeDoux et al., 2023). Since our framework is based on axiomatic mathematics, it resonates with the suspension of the natural attitude (Signorelli et al., 2021b; Samit et al., 2019; Hartimo, 2010), questioning hidden assumptions and making them explicit through formal definitions and instantiations of concrete mathematical structures. ...
Article
Full-text available
An algebraic interpretation of multigraph networks is introduced in relation to conscious experience, brain and body. These multigraphs have the ability to merge by an associative binary operator, accounting for biological composition. We also study a mathematical formulation of splitting layers, resulting in a formal analysis of the transition from conscious to non-conscious activity. From this construction, we recover core structures for conscious experience, dynamical content and causal constraints that conscious interactions may impose. An important result is the prediction of structural topological changes after conscious interactions. These results may inspire further use of formal mathematics to describe and predict new features of conscious experience, while aligning well with formal attempts to mathematize phenomenology, the phenomenological tradition, and applications to artificial consciousness.
... There has recently been an explosion of interest among philosophers and cognitive scientists more broadly in the question of whether and which nonhuman animals, 1 including invertebrates such as octopods and bees, have consciousness (e.g., Tye, 2016; Carruthers, 2019; Birch, 2022; LeDoux et al., 2023). To be clear, what is at issue here is whether such creatures enjoy phenomenal consciousness-the kind of consciousness for which, to adopt Nagel's (1974) expression, there is something that it is like for one to have it. ...
Article
Full-text available
According to what Birch (2022) calls the theory-heavy approach to investigating nonhuman-animal consciousness, we select one of the well-developed theories of consciousness currently debated within contemporary cognitive science and investigate whether animals exhibit the neural structures or cognitive abilities posited by that theory as sufficient for consciousness. Birch argues, however, that this approach is in general problematic because it faces what he dubs the dilemma of demandingness—roughly, that we cannot use theories that are based on the human case to assess consciousness in nonhuman animals and vice versa. We argue here that, though this dilemma may problematize the application of many current accounts of consciousness to nonhuman animals, it does not challenge the use of standard versions of the higher-order thought theory (“HOTT”) of consciousness, according to which a creature is in a conscious mental state just in case it is aware of being in that state via a suitable higher-order thought (“HOT”). We show this in two ways. First, we argue that, unlike many extant theories of consciousness, HOTT is typically motivated by a commonsense, and more importantly, neutral condition on consciousness that applies to humans and animals alike. Second, we offer new empirical and theoretical reasons to think that many nonhuman animals possess the relevant HOTs necessary for consciousness. Considering these issues not only reveals the explanatory power of HOTT and some of its advantages over rival accounts, but also enables us to further extend and clarify the theory.
Article
Full-text available
The concept of mind upload into a machine has become a popular topic that has attracted further investigations by both the community of practice and scholarly researchers. Technological advances in artificial intelligence (AI) have enhanced knowledge about mind upload capabilities. The development of these capabilities has reached an important milestone: either let the technology advance further without controlled intervention that addresses ethical and sociological apprehensions, or balance these developments through the introduction of proper legislation that acts in the best interests of the individual and society, irrespective of culture and geography. This research recommends that more focus should be placed on identifying the sociological issues raised by human mind upload and the laws and regulations associated with them. Involved parties must engage in more effective and visible collaboration and co-operation to put the interests of individuals and society first during the development of mind upload options and solutions.
Article
Could emotions be a uniquely human phenomenon? One prominent theory in emotion science, Lisa Feldman Barrett’s Theory of Constructed Emotion (TCE), suggests they might be. The source of the sceptical challenge is that TCE links emotions to abstract concepts tracking socio-normative expectations, and other animals are unlikely to have such concepts. Barrett’s own response to the sceptical challenge is to relativize emotion to the perspective of an interpreter, but this is unpromising. A more promising response may be to amend the theory, dropping the commitment to the abstract nature of emotion concepts and allowing that, like olfactory concepts, they have disjunctive sensory groundings. Even if other animals were emotionless, this would not imply they lack morally significant interests. Unconceptualized valenced experiences are a sufficient basis for morally significant interests, and such experiences may occur even in the absence of discrete, constructed emotions.
Article
Full-text available
The ability for self-related thought is historically considered to be a uniquely human characteristic. Nonetheless, as technological knowledge advances, it comes as no surprise that the plausibility of humanoid self-awareness is not only theoretically explored but also engineered. Could the emerging behavioural and cognitive capabilities in artificial agents be comparable to humans? By employing a cross-disciplinary approach, the present essay aims to address this question by providing a comparative overview on the emergence of self-awareness as demonstrated in early childhood and robotics. It argues that developmental psychologists can gain invaluable theoretical and methodological insights by considering the relevance of artificial agents in better understanding the behavioural manifestations of human self-consciousness.
Preprint
Full-text available
Human learning essentially involves embodied interactions with the material world. But our worlds now include increasing numbers of powerful and (apparently) disembodied generative AIs. In what follows we ask how best to understand these new (somewhat "alien", because of their disembodied nature) resources and how to incorporate them in our educational practices. We focus on methodologies that encourage exploration and embodied interactions with 'prepared' material environments, such as the carefully organised settings of Montessori education. Using the Active Inference Framework, we approach our questions by thinking about human learning as epistemic foraging and prediction error minimization. We end by arguing that generative AIs should figure naturally as new elements in prepared learning environments by facilitating sequences of precise prediction error enabling trajectories of self-correction. In these ways we anticipate new synergies between (apparently) disembodied and (essentially) embodied forms of intelligence.
Preprint
Full-text available
Theory of mind (ToM), or the ability to impute unobservable mental states to others, is central to human social interactions, communication, empathy, self-consciousness, and morality. We administer classic false-belief tasks, widely used to test ToM in humans, to several language models, without any examples or pre-training. Our results show that models published before 2022 show virtually no ability to solve ToM tasks. Yet, the January 2022 version of GPT-3 (davinci-002) solved 70% of ToM tasks, a performance comparable with that of seven-year-old children. Moreover, its November 2022 version (davinci-003), solved 93% of ToM tasks, a performance comparable with that of nine-year-old children. These findings suggest that ToM-like ability (thus far considered to be uniquely human) may have spontaneously emerged as a byproduct of language models' improving language skills.
Article
Full-text available
Integrating neurons into digital systems may enable performance infeasible with silicon alone. Here, we develop DishBrain, a system that harnesses the inherent adaptive computation of neurons in a structured environment. In vitro neural networks from human or rodent origins are integrated with in silico computing via a high-density multielectrode array. Through electrophysiological stimulation and recording, cultures are embedded in a simulated game-world, mimicking the arcade game 'Pong'. Applying implications from the theory of active inference via the free energy principle, we find apparent learning within five minutes of real-time gameplay not observed in control conditions. Further experiments demonstrate the importance of closed-loop structured feedback in eliciting learning over time. Cultures display the ability to self-organize activity in a goal-directed manner in response to sparse sensory information about the consequences of their actions, which we term synthetic biological intelligence. Future applications may provide further insights into the cellular correlates of intelligence.
Article
Full-text available
‘Sentience’ sometimes refers to the capacity for any type of subjective experience, and sometimes to the capacity to have subjective experiences with a positive or negative valence, such as pain or pleasure. We review recent controversies regarding sentience in fish and invertebrates and consider the deep methodological challenge posed by these cases. We then present two ways of responding to the challenge. In a policy‐making context, precautionary thinking can help us treat animals appropriately despite continuing uncertainty about their sentience. In a scientific context, we can draw inspiration from the science of human consciousness to disentangle conscious and unconscious perception (especially vision) in animals. Developing better ways to disentangle conscious and unconscious affect is a key priority for future research.
Article
Full-text available
Understanding how consciousness arises from neural activity remains one of the biggest challenges for neuroscience. Numerous theories have been proposed in recent years, each gaining independent empirical support. Currently, there is no comprehensive, quantitative and theory-neutral overview of the field that enables an evaluation of how theoretical frameworks interact with empirical research. We provide a bird’s eye view of studies that interpreted their findings in light of at least one of four leading neuroscientific theories of consciousness (N = 412 experiments), asking how methodological choices of the researchers might affect the final conclusions. We found that supporting a specific theory can be predicted solely from methodological choices, irrespective of findings. Furthermore, most studies interpret their findings post hoc, rather than a priori testing critical predictions of the theories. Our results highlight challenges for the field and provide researchers with an open-access website (https://ContrastDB.tau.ac.il) to further analyse trends in the neuroscience of consciousness. Yaron and colleagues collected and classified 412 experiments relating to four leading theories in consciousness research, providing a comprehensive overview of the field and unravelling trends and methodological biases.
Article
Full-text available
It is often said that fear is a universal innate emotion that we humans have inherited from our mammalian ancestors by virtue of having inherited conserved features of their nervous systems. Contrary to this common sense-based scientific point of view, I have argued that what we have inherited from our mammalian ancestors, and they from their distal vertebrate ancestors, and they from their chordate ancestors, and so forth, is not a fear circuit. It is, instead, a defensive survival circuit that detects threats, and in response, initiates defensive survival behaviours and supporting physiological adjustments. Seen in this light, the defensive survival circuits of humans and other mammals can be conceptualized as manifestations of an ancient survival function—the ability to detect danger and respond to it—that may in fact predate animals and their nervous systems, and perhaps may go back to the beginning of life. Fear, on the other hand, from my perspective, is a product of cortical cognitive circuits. This conception is not just of academic interest. It also has practical implications, offering clues as to why efforts to treat problems related to fear and anxiety are not more effective, and what might make them better. This article is part of the theme issue ‘Systems neuroscience through the lens of evolutionary theory’.
Article
Full-text available
DARPA formulated the Explainable Artificial Intelligence (XAI) program in 2015 with the goal to enable end users to better understand, trust, and effectively manage artificially intelligent systems. In 2017, the four-year XAI research program began. Now, as XAI comes to an end in 2021, it is time to reflect on what succeeded, what failed, and what was learned. This article summarizes the goals, organization, and research progress of the XAI Program.
Article
Full-text available
We define a cognitive system as a system that can learn, and adopt an evolutionary-transition-oriented framework for analysing different types of neural cognition. This enables us to classify types of cognition and point to the continuities and discontinuities among them. The framework we use for studying evolutionary transitions in learning capacities focuses on qualitative changes in the integration, storage and use of neurally processed information. Although there are always grey areas around evolutionary transitions, we recognize five major neural transitions, the first two of which involve animals at the base of the phylogenetic tree: (i) the evolutionary transition from learning in non-neural animals to learning in the first neural animals; (ii) the transition to animals showing limited, elemental associative learning, entailing neural centralization and primary brain differentiation; (iii) the transition to animals capable of unlimited associative learning, which, on our account, constitutes sentience and entails hierarchical brain organization and dedicated memory and value networks; (iv) the transition to imaginative animals that can plan and learn through selection among virtual events; and (v) the transition to human symbol-based cognition and cultural learning. The focus on learning provides a unifying framework for experimental and theoretical studies of cognition in the living world. This article is part of the theme issue ‘Basal cognition: multicellularity, neurons and the cognitive lens’.
Article
Full-text available
Metacognition – the ability to represent, monitor and control ongoing cognitive processes – helps us perform many tasks, both when acting alone and when working with others. While metacognition is adaptive, and found in other animals, we should not assume that all human forms of metacognition are gene-based adaptations. Instead, some forms may have a social origin, including the discrimination, interpretation, and broadcasting of metacognitive representations. There is evidence that each of these abilities depends on cultural learning and therefore that cultural selection might shape human metacognition. The cultural origins hypothesis is a plausible and testable alternative that directs us towards a substantial new programme of research.
Article
Full-text available
The focus of this opinion is on the key features of sentience in animals which can experience different states of welfare, encapsulated by the new term ‘welfare-aligned sentience’. This term is intended to exclude potential forms of sentience that do not enable animals in some taxa to have the subjective experiences which underlie different welfare states. As the scientific understanding of key features of sentience has increased markedly during the last 10 to 15 years, a major purpose here is to provide up-to-date information regarding those features. Eleven interconnected statements about sentience-associated body functions and behaviour are therefore presented and explained briefly. These statements are sequenced to provide progressively more information about key scientifically-supported attributes of welfare-aligned sentience, leading, in their entirety, to a more comprehensive understanding of those attributes. They are as follows: (1) Internal structure–function interactions and integration are the foundations of sentience; (2) animals possess a capacity to respond behaviourally to a range of sensory inputs; (3) the more sophisticated nervous systems can generate subjective experiences, that is, affects; (4) sentience means that animals perceive or experience different affects consciously; (5) within a species, the stage of neurobiological development is significant; (6) during development the onset of cortically-based consciousness is accompanied by cognitively-enhanced capacities to respond behaviourally to unpredictable postnatal environments; (7) sentience includes capacities to communicate with others and to interact with the environment; (8) sentience incorporates experiences of negative and positive affects; (9) negative and positive affective experiences ‘matter’ to animals for various reasons; (10) acknowledged obstacles inherent in anthropomorphism are largely circumvented by new scientific knowledge, but caution is still required; and (11) there is
increasing evidence for sentience among a wider range of invertebrates. The science-based explanations of these statements provide the foundation for a brief definition of ‘welfare-aligned sentience’, which is offered for consideration. Finally, it is recommended that when assessing key features of sentience the same emphasis should be given to positive and negative affective experiences in the context of their roles in, or potential impacts on, animal welfare.
Article
Full-text available
Significance In today’s world, indirect exposure to threatening situations is more common than ever, as illustrated by footage of terror and disaster in social media. How do such social threat learning experiences shape our decisions? We found that learning about threats from both observation and verbal information strongly influenced decision making. As with learning from our own experience, this influence could be either adaptive or maladaptive depending on whether the social information provided accurate expectations about the environment. Our findings can help explain both adaptive and pathological behaviors resulting from the indirect exposure to threatening events.
Article
Full-text available
Is activity in prefrontal cortex (PFC) critical for conscious perception? Major theories of consciousness make distinct predictions about the role of PFC, providing an opportunity to arbitrate between these views empirically. Here we address three common misconceptions: (1) PFC lesions do not affect subjective perception; (2) PFC activity does not reflect specific perceptual content; and (3) PFC involvement in studies of perceptual awareness is solely driven by the need to make reports required by the experimental tasks rather than subjective experience per se. These claims are incompatible with empirical findings, unless one focuses only on studies using methods with limited sensitivity. The literature highlights PFC's essential role in enabling the subjective experience in perception, contra the objective capacity to perform visual tasks; conflating the two can also be a source of confusion. Dual Perspectives Companion Paper: Are the Neural Correlates of Consciousness in the Front or in the Back of the Cerebral Cortex? Clinical and Neuroimaging Evidence, by Melanie Boly, Marcello Massimini, Naotsugu Tsuchiya, Bradley R. Postle, Christof Koch, and Giulio Tononi
Article
Full-text available
Significance Here we investigated a key question regarding how the brain resolves conflicting information—specifically, does binocular rivalry competition require conscious awareness of interocular conflicting information? We first showed that a chromatic grating counterphase flickering at 30 Hz became invisible but could induce a significant tilt aftereffect and orientation-selective adaptation. The invisible gratings produced significant BOLD activities in the early visual cortex but not in frontoparietal cortical areas. Further experiments revealed that although the pattern information was not consciously perceived, invisible orientation conflict between the two eyes could induce rivalry competition. Thus, visual competition could occur without conscious representation of the conflicting visual inputs, presumably in the sensory cortex with minimal engagement of high-level cortex and related top-down feedback modulations.
Article
Full-text available
In the past, metacognition has been defined very broadly. On the one hand it has been referred to as an implicit process, where awareness need not be involved. On the other hand - the stronger and more interesting sense - metacognitive processes have been used synonymously with introspection, consciousness, and self-reflection. In this chapter, we categorize the large range of existing metacognitive processes into three formal levels: anoetic metacognition, noetic metacognition, and autonoetic metacognition. Judgements that are bound to the current time, or made in the presence of stimuli, are classified as anoetic. Judgements that refer to or relate to internal representations, and are made in the absence of external stimuli, are classified as noetic. But only autonoetic metacognition requires the individual to make judgements about internal representations, and in addition have awareness that the self is intimately involved. While we can clearly distinguish between the three levels of metacognition, we continue to ponder two questions: first, is there a way to show that a nonhuman animal, or even a machine like Watson, can - autonoetically - reflect? Second, is a judgement without such self-reflection metacognition at all?
Chapter
Full-text available
Consciousness is at the very core of the human condition. Yet only in recent decades has it become a major focus in the brain and behavioral sciences. Scientists now know that consciousness involves many levels of brain functioning, from brainstem to cortex. The almost seventy articles in this book reflect the breadth and depth of this burgeoning field. The many topics covered include consciousness in vision and inner speech, immediate memory and attention, waking, dreaming, coma, the effects of brain damage, fringe consciousness, hypnosis, and dissociation. Underlying all the selections are the questions, What difference does consciousness make? What are its properties? What role does it play in the nervous system? How do conscious brain functions differ from unconscious ones? The focus of the book is on scientific evidence and theory. The editors have also chosen introductory articles by leading scientists to allow a wide variety of new readers to gain insight into the field. Bradford Books imprint
Article
Full-text available
I argue that the feeling that one is the owner of his or her mental states is not an intrinsic property of those states. Rather, it consists in a contingent relation between consciousness and its intentional objects. As such, there are (a variety of) circumstances, varying in their interpretive clarity, in which this relation can come undone. When this happens, the content of consciousness still is apprehended, but the feeling that the content “belongs to me” no longer is secured. I discuss the implications of a mechanism enabling personal ownership for understanding a variety of clinical syndromes as well as normal mental function.
Article
Full-text available
The science of consciousness has made great strides by focusing on the behavioral and neuronal correlates of experience. However, correlates are not enough if we are to understand even basic neurological facts; nor are they of much help in cases where we would like to know if consciousness is present: patients with a few remaining islands of functioning cortex, pre-term infants, non-mammalian species, and machines that are rapidly outperforming people at driving, recognizing faces and objects, and answering difficult questions. To address these issues, we need a theory of consciousness that specifies what experience is and what type of physical systems can have it. Integrated Information Theory (IIT) does so by starting from conscious experience via five phenomenological axioms of existence, composition, information, integration, and exclusion. From these it derives five postulates about the properties required of physical mechanisms to support consciousness. The theory provides a principled account of both the quantity and the quality of an individual experience, and a calculus to evaluate whether or not a particular system of mechanisms is conscious and of what. IIT explains a range of clinical and laboratory findings, makes testable predictions, and extrapolates to unusual conditions. The theory vindicates some panpsychist intuitions - consciousness is an intrinsic, fundamental property, is graded, is common among biological organisms, and even some very simple systems have some. However, unlike panpsychism, IIT implies that not everything is conscious; for example, groups of individuals or feed-forward networks are not. In sharp contrast with widespread functionalist beliefs, IIT implies that digital computers, even if their behavior were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.
Book
Full-text available
There exists an undeniable chasm between the capacities of humans and those of animals. Our minds have spawned civilizations and technologies that have changed the face of the Earth, whereas even our closest animal relatives sit unobtrusively in their dwindling habitats. Yet despite longstanding debates, the nature of this apparent gap has remained unclear. What exactly is the difference between our minds and theirs? In The Gap, psychologist Thomas Suddendorf provides a definitive account of the mental qualities that separate humans from other animals, as well as how these differences arose. Drawing on two decades of research on apes, children, and human evolution, he surveys the abilities most often cited as uniquely human—language, intelligence, morality, culture, theory of mind, and mental time travel—and finds that two traits account for most of the ways in which our minds appear so distinct: Namely, our open-ended ability to imagine and reflect on scenarios, and our insatiable drive to link our minds together. These two traits explain how our species was able to amplify qualities that we inherited in parallel with our animal counterparts; transforming animal communication into language, memory into mental time travel, sociality into mind reading, problem solving into abstract reasoning, traditions into culture, and empathy into morality. Suddendorf concludes with the provocative suggestion that our unrivalled status may be our own creation—and that the gap is growing wider not so much because we are becoming smarter but because we are killing off our closest intelligent animal relatives. Weaving together the latest findings in animal behavior, child development, anthropology, psychology, and neuroscience, this book will change the way we think about our place in nature. A major argument for reconsidering what makes us human, The Gap is essential reading for anyone interested in our evolutionary origins and our relationship with the rest of the animal kingdom. 
“A provocative and entertaining gem of a book.” —Simon Baron-Cohen. “Beautifully written, well researched and thought provoking… I found it fascinating and strongly recommend it to everyone who is curious as to how we have evolved to become the dominant species in the world today.” —Jane Goodall. “Sure-handed, fascinating book” —Scientific American Mind. “Fascinating… enjoyable… would make [a] marvellous gift” —Nature. For more information visit http://thegap.psy.uq.edu.au/
Article
Full-text available
This article addresses some open questions about the self, development, consciousness and memory, especially episodic memory, and attempts to clarify, in a descriptive manner, these phenomena and especially the relationships between them. In particular, cognitive child development of memory and current theorizing on semantic and episodic memory and related developmental states of consciousness show how different levels of development of the self, identity and memory relate to the ontogenetic development of different stages of consciousness of being in the world. A gradual distinction becomes outlined: from a rudimentary state of autonomic awakeness or unknowing consciousness as a biological adaptive function, with a first sort of “self-experience” already apparent at an anoetic level of consciousness relying on implicit experiential and procedural memory, towards “knowing consciousness”, including “noetic” and “autonoetic” consciousness based on semantic and episodic memory systems.
Article
Full-text available
Visual self-recognition is often controversially cited as an indicator of self-awareness and assessed with the mirror-mark test. Great apes and humans, unlike small apes and monkeys, have repeatedly passed mirror tests, suggesting that the underlying brain processes are homologous and evolved 14-18 million years ago. However, neuroscientific, developmental, and clinical dissociations show that the medium used for self-recognition (mirror vs photograph vs video) significantly alters behavioral and brain responses, likely due to perceptual differences among the different media and prior experience. On the basis of this evidence and evolutionary considerations, we argue that the visual self-recognition skills evident in humans and great apes are a byproduct of a general capacity to collate representations, and need not index other aspects of self-awareness.
Article
Full-text available
Replication or even modelling of consciousness in machines requires some clarifications and refinements of our concept of consciousness. Design of, construction of, and interaction with artificial systems can itself assist in this conceptual development. We start with the tentative hypothesis that although the word 'consciousness' has no well-defined meaning, it is used to refer to aspects of human and animal information processing. We then argue that we can enhance our understanding of what these aspects might be by designing and building virtual-machine architectures capturing various features of consciousness. This activity may in turn nurture the development of our concepts of consciousness, showing how an analysis based on information-processing virtual machines answers old philosophical puzzles as well as enriching empirical theories. This process of developing and testing ideas by developing and testing designs leads to gradual refinement of many of our pre-theoretical concepts of mind, showing how they can be construed as implicitly 'architecture-based' concepts. Understanding how human-like robots with appropriate architectures are likely to feel puzzled about qualia may help us resolve those puzzles. The concept of 'qualia' turns out to be an 'architecture-based' concept, while individual qualia concepts are 'architecture-driven'.
Article
Full-text available
Describes laboratory and clinical attempts to relate different memory systems (procedural, semantic, and episodic) to corresponding varieties of consciousness (anoetic, noetic, and autonoetic). The case of a young adult male amnesic patient is described. The S suffered a closed head injury that left him without autonoetic consciousness. This deficit is manifested in his amnesia for personal events and his impaired awareness of subjective time. Two simple experiments investigated recall and recognition by a total of 89 normal undergraduates to further examine autonoetic consciousness as the necessary correlate of episodic memory. Results show that the distinction between knowing and remembering previous occurrences of events is meaningful to people, that people can make corresponding judgments about their memory performance, and that these judgments vary systematically with the conditions under which retrieved information takes place.
Article
Full-text available
Higher-order theories of consciousness argue that conscious awareness crucially depends on higher-order mental representations that represent oneself as being in particular mental states. These theories have featured prominently in recent debates on conscious awareness. We provide new leverage on these debates by reviewing the empirical evidence in support of the higher-order view. We focus on evidence that distinguishes the higher-order view from its alternatives, such as the first-order, global workspace and recurrent visual processing theories. We defend the higher-order view against several major criticisms, such as the claims that prefrontal activity reflects attention but not awareness, and that prefrontal lesions do not abolish awareness. Although the higher-order approach originated in philosophical discussions, we show that it is testable and has received substantial empirical support.
Article
Full-text available
Synesthesia provides an elegant model to investigate neural mechanisms underlying individual differences in subjective experience in humans. In grapheme-color synesthesia, written letters induce color sensations, accompanied by activation of color area V4. Competing hypotheses suggest that enhanced V4 activity during synesthesia is either induced by direct bottom-up cross-activation from grapheme processing areas within the fusiform gyrus, or indirectly via higher-order parietal areas. Synesthetes differ in the way synesthetic color is perceived: "projector" synesthetes experience color externally colocalized with a presented grapheme, whereas "associators" report an internally evoked association. Using dynamic causal modeling for fMRI, we show that V4 cross-activation during synesthesia was induced via a bottom-up pathway (within fusiform gyrus) in projector synesthetes, but via a top-down pathway (via parietal lobe) in associators. These findings show how altered coupling within the same network of active regions leads to differences in subjective experience. Our findings reconcile the two most influential cross-activation accounts of synesthesia.
Article
Full-text available
This essay provides an overview of evolutionary levels of consciousness, with a focus on a continuum of consciousness: from primarily affective to more advanced cognitive forms of neural processing - from anoetic (without knowledge) consciousness based on affective feelings, elaborated by brain networks that are subcortical and can function without neocortical involvement, to noetic (knowledge-based) and autonoetic (higher reflective mental) processes that permit conscious awareness. An abundance of such mind-brain linkages have been established using standard neuropsychological and brain-imaging procedures. Much of the characterization of human mental landscapes has been achieved with long-accepted psychometric procedures that often do not adequately tap the lived anoetic experiential phenomenological aspects of mind. Without an understanding of affect-based anoetic forms of consciousness, an adequate characterization of the human mind may never be achieved. A full synthesis will require us to view mental-experiential processes concurrently at several distinct neurophysiological levels, including foundational affective-emotional issues that are best probed with cross-species affective neuroscience strategies. This essay attempts to relate these levels of analysis to the neural systems that constitute lived experience in the human mind.
Article
Full-text available
Social life has costs associated with competition for resources such as food. Food storing may reduce this competition as the food can be collected quickly and hidden elsewhere; however, it is a risky strategy because caches can be pilfered by others. Scrub jays (Aphelocoma coerulescens) remember 'what', 'where' and 'when' they cached. Like other corvids, they remember where conspecifics have cached, pilfering them when given the opportunity, but may also adjust their own caching strategies to minimize potential pilfering. To test this, jays were allowed to cache either in private (when the other bird's view was obscured) or while a conspecific was watching, and then recover their caches in private. Here we show that jays with prior experience of pilfering another bird's caches subsequently re-cached food in new cache sites during recovery trials, but only when they had been observed caching. Jays without pilfering experience did not, even though they had observed other jays caching. Our results suggest that jays relate information about their previous experience as a pilferer to the possibility of future stealing by another bird, and modify their caching strategy accordingly.
Article
Full-text available
Discussions of the evolution of intelligence have focused on monkeys and apes because of their close evolutionary relationship to humans. Other large-brained social animals, such as corvids, also understand their physical and social worlds. Here we review recent studies of tool manufacture, mental time travel, and social cognition in corvids, and suggest that complex cognition depends on a "tool kit" consisting of causal reasoning, flexibility, imagination, and prospection. Because corvids and apes share these cognitive tools, we argue that complex cognitive abilities evolved multiple times in distantly related species with vastly different brain structures in order to solve similar socioecological problems.
Preprint
I introduce an empirically grounded version of a higher-order theory of conscious perception. Traditionally, theories of consciousness either focus on the global availability of conscious information, or take conscious phenomenology as a brute fact due to some biological or basic representational properties. Here I argue instead that the key to characterizing consciousness lies in its connections to belief formation and epistemic justification on a subjective level.
Article
Whether animals have subjective experiences about the content of their sensory input, i.e., whether they are aware of stimuli, is a notoriously difficult question to answer. If consciousness is present in animals, it must share fundamental characteristics with human awareness. Working memory and voluntary/endogenous attention are suggested as diagnostic features of conscious awareness. Behavioral evidence shows clear signatures of both working memory and voluntary attention as a minimal criterion for sensory consciousness in mammals and birds. In contrast, reptiles and amphibians show no sign of either working memory or volitional attention. Surprisingly, some species of teleost fishes exhibit elementary working memory and voluntary attention effects suggestive of possibly rudimentary forms of subjective experience. With the potential exception of honeybees, evidence for conscious processing is lacking in invertebrates. These findings suggest that consciousness is not ubiquitous in the animal kingdom but also not exclusive to humans. The phylogenetic gap between animal taxa argues that evolution does not rely on specific neural substrates to endow distantly related species with basic forms of consciousness.
Article
Conscious experiences involve subjective qualities, such as colours, sounds, smells and emotions. In this Perspective, we argue that these subjective qualities can be understood in terms of their similarity to other experiences. This account highlights the role of memory in conscious experience, even for simple percepts. How an experience feels depends on implicit memory of the relationships between different perceptual representations within the brain. With more complex experiences such as emotions, explicit memories are also recruited. We draw inspiration from work in machine learning as well as the cognitive neuroscience of learning and decision making to make our case and discuss how the account could be tested in future experiments. The resulting findings might help to reveal the functions of subjective experience and inform current theoretical debates on consciousness. Conscious experiences range from simple experiences of colour to rich experiences that combine sensory input and bodily states. In this Perspective, Lau and colleagues propose that simple experiences depend on similarity encoded in implicit memory and that complex experiences also require replay of explicit memory.
Article
Recent years have seen a blossoming of theories about the biological and physical basis of consciousness. Good theories guide empirical research, allowing us to interpret data, develop new experimental techniques and expand our capacity to manipulate the phenomenon of interest. Indeed, it is only when couched in terms of a theory that empirical discoveries can ultimately deliver a satisfying understanding of a phenomenon. However, in the case of consciousness, it is unclear how current theories relate to each other, or whether they can be empirically distinguished. To clarify this complicated landscape, we review four prominent theoretical approaches to consciousness: higher-order theories, global workspace theories, re-entry and predictive processing theories and integrated information theory. We describe the key characteristics of each approach by identifying which aspects of consciousness they propose to explain, what their neurobiological commitments are and what empirical data are adduced in their support. We consider how some prominent empirical debates might distinguish among these theories, and we outline three ways in which theories need to be developed to deliver a mature regimen of theory-testing in the neuroscience of consciousness. There are good reasons to think that the iterative development, testing and comparison of theories of consciousness will lead to a deeper understanding of this most profound of mysteries. Various theories have been developed for the biological and physical basis of consciousness. In this Review, Anil Seth and Tim Bayne discuss four prominent theoretical approaches to consciousness, namely higher-order theories, global workspace theories, re-entry and predictive processing theories and integrated information theory.
Article
In this My word, Joseph LeDoux explores what the emotional lives of other mammals might be like. He proposes that better understanding of the brain mechanisms of emotional consciousness in humans might shed light on the kinds of conscious capacities that might be possible in non-human primates and non-primate mammals, given the kinds of brains they possess.
Chapter
How can we determine if AI is conscious? The chapter begins by illustrating that there are potentially very serious real-world costs to getting facts about AI consciousness wrong. It then proposes a provisional framework for investigating artificial consciousness that involves several tests or markers. One test is the AI Consciousness Test, which challenges an AI with a series of increasingly demanding natural-language interactions. Another test is based on the Integrated Information Theory, developed by Giulio Tononi and others, and considers whether a machine has a high level of “integrated information.” A third test is a Chip Test, in which, speculatively, an individual’s brain is gradually replaced with durable microchips. If the individual being tested continues to report having phenomenal consciousness, the chapter argues that this could be a reason to believe that some machines could have consciousness.
Article
The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
Chapter
In this chapter, the author develops biological naturalism as a theory of consciousness. Biological naturalism is the name given to the approach to what is traditionally called “the mind-body problem”. The chapter gives a definition of consciousness, a brief account of some of its most important structural features, and a general statement of its relations to the brain and other parts of the real world. It also discusses a few objections to biological naturalism from the point of view of the philosophical tradition. It is important to emphasize that one can have epistemically objective knowledge of a domain that is ontologically subjective. It is for this reason that an epistemically objective science of ontologically subjective consciousness is possible. Three features of consciousness are qualitativeness, subjectivity, and unity. One can make the causal power of subjective consciousness perfectly consistent with its causal functioning as a natural neurobiological, and therefore electrochemical, set of processes.
Chapter
The chapter tackles the placement of self-reflective consciousness amongst Darwin's numberless gradations. Discussions of self-consciousness inevitably lead to Descartes' dictum, "I think, therefore I am". The goal is a rapprochement between the Darwinian view and the Cartesian view, which holds that this kind of consciousness is applicable only to humans. Descartes maintained that animals are unable to engage in self-reflection. Negative results of various ape language projects and broad advances in animal cognition suggest that Descartes was right about the uniqueness of language but wrong about animals' capacity for thought and self-reflection. There is abundant evidence that nonhuman primates can form representations and use them to solve problems. The concept of autonoetic consciousness, as Tulving calls it, seemed close to the constructs of self-reflective consciousness and metacognition that are the concern here. Thus, instead of focusing on language, more fundamental capabilities are considered: the origins of self-reflective consciousness.
Article
Where does meaning enter the picture in artificial intelligence? How can we say that a machine possesses understanding? Where, and how, does such understanding happen? These are among the deepest and hardest questions faced by the field of artificial intelligence, which, as many claim, has not yielded much about them so far. But some results may be just around the corner to some, and that group includes Douglas Hofstadter and the Fluid Analogies Research Group. They have been developing some...
Article
Procedures used to train laboratory animals often incorporate operant learning
Chapter
This chapter contains sections titled: Biological Naturalism as Scientifically Sophisticated Common Sense; Objections to Biological Naturalism from the Point of View of the Philosophical Tradition; Conclusion.
Article
Sharing a public language facilitates particularly efficient forms of joint perception and action by giving interlocutors refined tools for directing attention and aligning conceptual models and action. We hypothesized that interlocutors who flexibly align their linguistic practices and converge on a shared language will improve their cooperative performance on joint tasks. To test this prediction, we employed a novel experimental design, in which pairs of participants cooperated linguistically to solve a perceptual task. We found that dyad members generally showed a high propensity to adapt to each other's linguistic practices. However, although general linguistic alignment did not have a positive effect on performance, the alignment of particular task-relevant vocabularies strongly correlated with collective performance. In other words, the more dyad members selectively aligned linguistic tools fit for the task, the better they performed. Our work thus uncovers the interplay between social dynamics and sensitivity to task affordances in successful cooperation.
Article
The study of animal behaviour has been dominated by two general models. According to the mechanistic stimulus-response model, a particular behaviour is either an innate or an acquired habit which is simply triggered by the appropriate stimulus. By contrast, the teleological model argues that, at least, some activities are purposive actions controlled by the current value of their goals through knowledge about the instrumental relations between the actions and their consequences. The type of control over any particular behaviour can be determined by a goal revaluation procedure. If the animal’s performance changes appropriately following an alteration in the value of the goal or reward without further experience of the instrumental relationship, the behaviour should be regarded as a purposive action. On the other hand, the stimulus-response model is more appropriate for an activity whose performance is autonomous of the current value of the goal. By using this assay, we have found that a simple food-rewarded activity is sensitive to reward devaluation in rats following limited but not extended training. The development of this behavioural autonomy with extended training appears to depend not upon the amount of training per se, but rather upon the fact that the animal no longer experiences the correlation between variations in performance and variations in the associated consequences during overtraining. In agreement with this idea, limited exposure to an instrumental relationship that arranges a low correlation between performance and reward rates also favours the development of behavioural autonomy. Thus, the same activity can be either an action or a habit depending upon the type of training it has received.
Article
This article reviews the literature on learning and memory in the soil-dwelling nematode Caenorhabditis elegans. Paradigms include nonassociative learning, associative learning, and imprinting, as worms have been shown to habituate to mechanical and chemical stimuli, as well as learn the smells, tastes, temperatures, and oxygen levels that predict aversive chemicals or the presence or absence of food. In each case, the neural circuit underlying the behavior has been at least partially described, and forward and reverse genetics are being used to elucidate the underlying cellular and molecular mechanisms. Several genes have been identified with no known role other than mediating behavior plasticity.
Article
The recollection of past experiences allows us to recall what a particular event was, and where and when it occurred, a form of memory that is thought to be unique to humans. It is known, however, that food-storing birds remember the spatial location and contents of their caches. Furthermore, food-storing animals adapt their caching and recovery strategies to the perishability of food stores, which suggests that they are sensitive to temporal factors. Here we show that scrub jays (Aphelocoma coerulescens) remember 'when' food items are stored by allowing them to recover perishable 'wax worms' (wax-moth larvae) and non-perishable peanuts which they had previously cached in visuospatially distinct sites. Jays searched preferentially for fresh wax worms, their favoured food, when allowed to recover them shortly after caching. However, they rapidly learned to avoid searching for worms after a longer interval during which the worms had decayed. The recovery preference of jays demonstrates memory of where and when particular food items were cached, thereby fulfilling the behavioural criteria for episodic-like memory in non-human animals.
Article
The study examined the validity of oral fentanyl self-administration (FSA) as a measure of the chronic nociceptive pain that develops in rats with adjuvant arthritis independently of acute noxious challenges. Arthritic rats self-administered more of a 0.008 mg/ml fentanyl solution (up to 3.4 g/rat per day) than non-arthritic controls (0.5 g/rat per day) and did so with a biphasic time course that reached peak during weeks 3 and 4 after inoculation with Mycobacterium butyricum. The time course paralleled both the disease process and the chronic pain. Continuous infusion of dexamethasone during weeks 3 and 4 via subcutaneous osmotic pumps at 0.0025-0.04 mg/rat per day disrupted the arthritic disease and decreased FSA to a level (i.e. by 65%) similar to that observed in non-arthritic rats. Continuous naloxone (2.5 mg/rat per day) decreased FSA (by 55%) in arthritic but not in non-arthritic animals. Continuous, subcutaneous infusion of fentanyl also decreased arthritic FSA in a manner that varied with dose at 0.04-0.16 mg/rat per day doses, but leveled off at 47% of controls with 0.31 mg/rat per day. The effects of continuous fentanyl on arthritic FSA occurred only with those doses and dose-dependent dynamics with which fentanyl also induced dependence in non-arthritic rats. The findings indicate that pain, rather than the rewarding or dependence-inducing action of fentanyl mediates FSA in arthritic rats. Paralleling patient-controlled analgesic drug intake, FSA offers a specific measure allowing the dynamic effects of neurobiological agents to be studied in this unique animal model of persistent nociceptive pain.
Article
Knowledge of and planning for the future is a complex skill that is considered by many to be uniquely human. We are not born with it; children develop a sense of the future at around the age of two and some planning ability by only the age of four to five. According to the Bischof-Köhler hypothesis, only humans can dissociate themselves from their current motivation and take action for future needs: other animals are incapable of anticipating future needs, and any future-oriented behaviours they exhibit are either fixed action patterns or cued by their current motivational state. The experiments described here test whether a member of the corvid family, the western scrub-jay (Aphelocoma californica), plans for the future. We show that the jays make provision for a future need, both by preferentially caching food in a place in which they have learned that they will be hungry the following morning and by differentially storing a particular food in a place in which that type of food will not be available the next morning. Previous studies have shown that, in accord with the Bischof-Köhler hypothesis, rats and pigeons may solve tasks by encoding the future but only over very short time scales. Although some primates and corvids take actions now that are based on their future consequences, these have not been shown to be selected with reference to future motivational states, or without extensive reinforcement of the anticipatory act. The results described here suggest that the jays can spontaneously plan for tomorrow without reference to their current motivational state, thereby challenging the idea that this is a uniquely human ability.