Brain and consciousness
Jan Helm
Technical University Berlin
Email: jan.helm@alumni.tu-berlin.de
Abstract
This is a state-of-the-art report about consciousness, brain structure, brain functionality, language, and their
neurological foundations in the human brain, with some contributions of the author, which are marked as such.
The concept of consciousness in chapter 6 is based on Antonio Damasio’s concept of multilayered mind, on
Baars’ Global-Workspace-Theory, and on the concept of the “dual” brain based on the Parkins-Adolphs-Kuo-
Squire-Yordanova (PAKSY) model of procedural-declarative representation and processing in the brain,
outlined in chapter 5.
Chapter 2 gives a short description of the brain anatomy.
Chapter 3 describes the brain states and current methods of brain measurement.
Chapter 4 deals with the brain functionality of generating images and emotions, mapping and storing them in
memory.
The second all-important feature of human thinking after consciousness is language; its evolution and
neurological foundations are described in chapter 7.
Another important, exclusively human feature is long-term planning, which is dealt with in chapter 8.
Chapter 9 describes inter-mind cognition, as opposed to the cognition of physical and mental reality in
chapters 4, 5 and 6.
Chapter 10 presents mathematical and physical models of consciousness: the established IIT, GWT and AST, and
the new Inage & Hebishima model.
Chapter 11 is a tentative description of the evolution of biological complexity, intelligence, and biological /
artificial consciousness.
Contents
1 Mind structure
2 Brain structure
3 Brain states and brain measurement
4 Maps, emotions and memory
5 The dual brain
6 Consciousness
7 Language
8 Planning
9 Understanding other minds: Theory of Mind
10 Models of consciousness
10.1 Inage & Hebishima model
10.2 Integrated Information Theory (IIT)
10.3 Attention schema theory (AST)
10.4 Global workspace theory (GWT)
11 Evolution of terrestrial biological and artificial complexity, intelligence, consciousness
Literature
1 Mind structure
[JH] , [4] ch8, [4] ch9, [65]P2-1
The rational-physical world-view is a description of the physical (outer) world, in which we live; it consists of
mathematical models, which are continuously tested against reality by means of measurement.
We can perceive this outer world through our senses and through measurement and technical imaging, which
we can regard as "extended senses"; this perception is more or less the same for all humans. The physical stimuli
and the corresponding sensory response are both measurable.
But for us humans there is another, inner world, which we carry inside: the reality of our brain. Its
physical-biological structure and functionality is more or less equal among individuals, but there is no sensory
apparatus for it; we perceive it by means of introspection, which is not measurable and can merely be
communicated by language as a "fuzzy"-quantified or symbolic description. This introspective image of our
inner world is not measurable, and it covers only the consciously accessible part of this reality.
The inner reality encompasses the high-level brain functionality with its four aspects: consciousness, language,
motoric control and sensory imaging.
The high-level brain functionality consists of the interaction of four agents (Damasio 2011 [4]).
The original concept by Damasio (protoself, core-self, autobiographical self, high-level consciousness) is put on
a more solid neurological foundation by identifying the autobiographical self and high-level consciousness with
the procedural and declarative functionality of the dual-brain concept (Parkins' dual brain 2016 [65]).
Furthermore, it is extended by introducing two important features of brain functionality which are largely
ignored in Damasio's original concept: the cerebellum with its sensorimotor cognition as a central part of the
protoself, and the limbic system and thalamus as a central part of the core-self.
-Protoself maps sensory signals (visual, auditory, olfactory, tactile) from the body and motoric and chemo-
static (hormones) control signals to the body. Protoself forms primordial feelings (primary emotions) as mental
images and communicates them to the core-self.
Protoself is located:
-mainly in the cerebellum (which has 50% of the brain's neurons and controls sensorimotor information and
cognition, reflex learning, routine cognition and motion control)
-in the brain-stem
-in the hypothalamus: controls homeostasis by hormones
-in the mid-brain: processing of raw sensory data (visual, auditory, whole-body chemo-sensory and tactile information)
Hydranencephalic (deprived of cortex) individuals have normally functioning motorics and sensors (except
limited movement control) and are capable of all primary emotions [4] ch3.
The main components of protoself [4] ch8.5
brain-stem level: interoceptive integration
nucleus tractus solitarius (NTS), parabrachial nucleus (PBN), periaqueductal grey (PAG), area postrema,
hypothalamus, superior colliculus deep layers
cerebral cortex level: interoceptive integration
insular cortex, anterior cingulate cortex
cerebral cortex level: external sensory portals
frontal eyefields, somatosensory cortices
cerebellum level: sensor-motoric integration and autonomous cognition
-Core-self emerges as procedures (dispositions) establishing relationships between the protoself and any map
that represents an object-to-be-known. The relation between protoself and core-self is a narrative sequence of
images, some of which are feelings, others sensorial images.
Core-self (core consciousness) maps and connects
primary emotions from protoself as emotional mental images
external objects, humans, animals, as sensorial mental images
Core-self controls the combined emotional-sensorial learning
Core-self has short-term and long-term memory (in hippocampus)
Core-self is located in the limbic system (hippocampus: short-term and long-term memory; reticular
formation: controls wakefulness and sleep), in the superior colliculus (control of behavior toward objects), the
thalamus (control of awareness), and the cingulate cortex (emotion formation and processing) [40]
Schematic of core-self mechanisms [4] ch8.7
-Autobiographical self (extended consciousness) [41] emerges as dispositions linking objects in one's
biography with dispositions of core-self into a coherent pattern.
Autobiographical self enables recognition of oneself as a person; it passes the mirror test.
Human children show self-recognition in the mirror test at about 18 months of age; chimpanzees and gorillas do
so as adults.
Language capability and sense of personal history are limited, roughly at the level of great apes [32] [33]:
-limited language: sign language up to 1000 words, max. 3 words per sentence, no higher abstraction (no
generalization)
-limited sense of autobiographical history, limited sense of both past and future
Autobiographical self controls the visual, spatial, and auditory cortex.
Autobiographical self is located in the right cerebral hemisphere and controls the procedural-intuitive
functionality of the PAKSY-model (intuition, spatial awareness, cognition of music, creativity, facial
recognition, art, rhythm) [65] P2-1.4 .
The autobiographical self: neural mechanisms [4] ch9.1
-High-level consciousness controls the high-level mental processes of the human mind, it encompasses:
possession of complex language skills (recursive sentences, arbitrary abstraction levels),
strong sense of both past and future, strong sense of autobiographical self and memory,
conscience (personal and social ethics, a precondition for socio-cultural homeostasis),
substantial artistic and scientific creativity: the basis for the evolution of memes and techs, i.e. cultural and
techno-scientific evolution [34]
High-level consciousness is located in the frontal cortex (especially mPFC, responsible for symbolic concept
management and planning) and the left cerebral hemisphere, which controls the declarative-abstract
functionality of the PAKSY-model (speech and language, mathematical computations, rational reasoning,
logical analysis) [65] P2-1.5 .
Consciousness coordination and PMC
The 12 CDR regions compete for conscious focus; the (left and right) posteromedial cortices (PMC) are
the coordinators of conscious focus [4] ch9.5.
PMC’s activity varies in the different brain states:
-active in some attention states (salience, executive control), non-active in sensory attention and in dorsal
attention state
-increased in DMN mode
-low in REM-sleep (with dreams)
-non-active in slow-wave-sleep (non-REM, dreamless)
-non-active in somnambulant sleep (here cingulate cortex is active [46])
-active in unconscious wakefulness during epileptic absence
-non-active in anesthetized state
PMC, DMN and attention modes
The main DMN (Default Mode Network), consisting of the medial prefrontal cortex, the PMC and the angular
gyrus, controls self-referenced brain activities: autobiographical memory, anticipation of the future, daydreaming,
and moral judgment.
In DMN mode, PMC shows increased activity.
In focus-attention modes (visual, auditory, executive) PMC shows normal activity.
The DMN activities are non-verbal and self-referenced, therefore they can be attributed to the autobiographical
self (presumably right PMC). The focus-attention modes are in general verbal and symbolic, so they can be
attributed to the high-level consciousness (presumably left PMC).
The pattern of neural connections to and from the posteromedial cortices (PMCs), as determined in a study
conducted in the monkey. Abbreviations: dlpfc = dorsolateral prefrontal cortex; fef = frontal eye fields;
vmpfc = ventromedial prefrontal cortex; bf = basal forebrain; claus = claustrum; acc = nucleus accumbens;
amy = amygdala; pag = periaqueductal gray [4] ch9.5.
The four locations of consciousness [4] ch10.2, [65] P3-7
-Brain stem
The brain stem (PAG = periaqueductal gray) is the low-level part of protoself, which handles hormonal control.
-Cerebellum
The cerebellum is the high-level part of protoself, which controls sensorimotor information and cognition, and
reflex learning.
-Thalamus + limbic system
Thalamus is essential for wakefulness, bridges brain stem to cortex, and brings in the inputs with which cortical
maps can be assembled. The limbic system controls emotional-sensorial learning, long-term memory,
wakefulness, and sleep. The thalamus+limbic system is the non-cortical part of core-self.
-Cortex
The insular and somatosensory cortex is the cortical part of protoself.
The cingulate cortex (emotion formation and processing, connection with maps) is the cortical part of core-self.
The PMC is the coordinator of consciousness, the right PMC is the center of autobiographical self and the left
PMC is the center of high-level consciousness.
The four control systems of the brain [65] P3-7, [71] ch4,8,13,15, [JH]
cerebrum control system: subdivided into right hemisphere (autobiographical self, procedural-intuitive
functionality), left hemisphere (high-level consciousness, declarative-abstract functionality)
limbic control system: controls sensorial-emotional learning with emotional valuation
cerebellum control system: controls sensorimotor reflex learning (protoself reflex level)
brainstem control system: controls hormone-based mechanisms (protoself innate level)
Schematic of the four control systems [JH]: sensory inputs (somatosensory, visual via the retina, auditory via the
cochlear ganglion) and motoric outputs are processed by the brain stem / hypothalamus, the cerebellum (sensorimotor
image, motoric sequence, sensorimotor memory), the thalamus / limbic system (emotion, mental image,
emotional-imagistic memory), and the cortex with the procedural right hemisphere (episodic-imagistic memory,
sensorial-imagistic emotional image, language comprehension) and the declarative left hemisphere (symbolic-verbal
memory, sensorial-episodic emotional image, language articulation), connected via the corpus callosum; the cortex
includes the somatosensory, visual, auditory and motor cortices.
The brainstem/midbrain control system is innate, i.e. it works with fixed feedback-loops.
The three higher-level control systems work with learning feedback.
The external stimuli and the internal state are processed into a current representation (map) of the outer and
inner world, evaluated by association with generated emotions, and compared with memory; the current
behavioral pattern is generated as output, and the brain possibly enters a new state. The behavioral pattern is
the output, i.e. the answer to the external and internal situation, and is stored in memory together with the
external and internal situation. The behavior changes the external and internal situation, and the new
evaluation by emotions gives the feedback for correction of the output; the neural network is corrected
accordingly: the brain "learns" the optimal behavior. The generated emotions are the valuation made by the brain
with the goal of preserving physical and mental homeostasis.
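As an illustration only, the following toy sketch mimics this feedback loop in code: a situation is mapped to a behavior, the behavior is valuated by an "emotion" signal, and the valuation is fed back to adjust future behavior. All names, the valuation rule, and the learning rate are invented for the sketch and are not a biological model.

```python
# Toy sketch of the learning feedback loop described above (illustrative only):
# situation -> behavior -> emotional valuation -> feedback correction of the behavior values.
import random

class ControlSystem:
    def __init__(self, actions):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}   # learned valuation of each behavior
        self.memory = []                          # stored (situation, behavior, emotion) triples

    def valuate(self, situation, behavior):
        # emotional valuation: reward behaviors that restore homeostasis (toy rule)
        return 1.0 if behavior == situation["needed"] else -1.0

    def step(self, situation, lr=0.2):
        # pick the currently best-valued behavior, with a little exploration noise
        behavior = max(self.actions, key=lambda a: self.values[a] + random.uniform(0, 0.1))
        emotion = self.valuate(situation, behavior)                       # feedback signal
        self.values[behavior] += lr * (emotion - self.values[behavior])  # "learn" the optimal behavior
        self.memory.append((situation, behavior, emotion))               # store situation and response
        return behavior, emotion

ctrl = ControlSystem(["flee", "eat", "rest"])
for _ in range(20):
    ctrl.step({"needed": "eat"})
print(ctrl.values)   # the valuation of "eat" converges toward +1
```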
Qualia and maps [4] ch10.6
Qualia I are sensory maps (visual images, music and sounds) and corresponding emotions.
Qualia II are neural and physical event maps (sequences of events, sequences of movements, personal and non-
personal stories) with corresponding emotions.
Autonomic nervous system
The autonomic nervous system (ANS) is a control system that largely unconsciously regulates bodily
functions, such as the heart rate, digestion, respiratory rate, pupillary response, urination, sexual arousal and the
fight-or-flight response.
-parasympathetic NS
The parasympathetic system is responsible for stimulation of "rest-and-digest" or "feed and breed" activities
that occur when the body is at rest, especially after eating, including sexual arousal, salivation, lacrimation
(tears), urination, digestion, and defecation.
-sympathetic NS
The autonomic nervous system functions to regulate the body's unconscious actions. The sympathetic nervous
system's primary process is to stimulate the body's fight or flight response. It is, however, constantly active at a
basic level to maintain homeostasis.
-enteric NS
The enteric nervous system (ENS) is one of the main divisions of the autonomic nervous system (ANS) and
consists of a mesh-like system of neurons that governs the function of the gastrointestinal tract.
The neurons of the enteric nervous system control the motor functions of the system, in addition to the secretion
of gastrointestinal enzymes.
2 Brain structure
The human brain consists of approximately 0.9·10^11 neurons and 10^14 synapses. It weighs approximately 1.3
kg and uses approximately 20% of the body's blood flow.
The brain has a specialized blood supply. It has a higher metabolic rate than the rest of the body, and a constant
flow of oxygen and nutrients.
The main parts of the brain are the cerebrum, cerebellum, interbrain (thalamus, hypothalamus), limbic system
(hippocampus, amygdala), brain stem (midbrain, pons, medulla).
COMPONENT | DESCRIPTION | FUNCTION
Cerebrum (forebrain) | Largest portion of the brain | Center of conscious thought and higher mental functioning (intelligence, learning, memory)
Cerebral cortex | Outer coating of the cerebrum; gray matter (nerve cell bodies) | Contains convolutions (grooves) and elevations (gyri) that increase the brain's surface area
Inner portion of brain | White matter | Location of billions of connections, due to presence of dendrites and myelinated axons
Frontal lobe | Located at front of skull, forehead | Location of higher mental processes (intelligence, motivation, mood, aggression, planning); site for verbal communication and voluntary control of skeletal muscles
Parietal lobe | Between frontal and occipital lobes | Location of skin, taste, and muscle sensations; speech center; enables formation of words to express thoughts and emotions; interprets textures and shapes
Temporal lobe | Located at sides of skull | Location of sense of smell and auditory interpretation; stores auditory and visual experiences; forms thoughts that precede speech
Occipital lobe | Located at back of skull | Location of eye movements; integrates visual experiences
Hemispheres / Corpus callosum | The longitudinal fissure divides the brain into right and left halves | Connects the hemispheres internally
Diencephalon (interbrain): Thalamus | Located between the cerebral hemispheres and the brainstem, within the cerebral hemispheres | Central relay point for incoming nerve messages ("switching center"); consolidates sensory input (especially extremes and pain); influences mood and body movements; associated with strong emotions
Diencephalon (interbrain): Hypothalamus | Located below the thalamus, at the base of the cerebrum | Regulates homeostasis: center for body temperature regulation, hunger, peristalsis, thirst and water balance, sexual response, and sleep-wake cycle; controls heart rate and blood vessel diameter; influences pituitary gland secretions; controls muscles of swallowing, shivering, and urine release; links nervous and endocrine systems
Limbic system: Hippocampus | The limbic system consists of hippocampus, reticular formation and amygdala and is located above the brainstem | Responsible for learning, long-term memory, wakefulness, and sleep
Limbic system: Amygdala | The amygdalae are located left and right above the hippocampus | Generate emotions, attach them to associations and responses, and store them as emotional memories
Cerebellum ("little brain") | Second largest part of the brain (part of hindbrain); attached to the back of the brainstem, below the curve of the cerebrum; connected, via the midbrain, to the spinal cord and the motor area of the cortex | Location of involuntary movement, coordination, muscle tone, balance, and equilibrium (semicircular canals); coordinates some voluntary muscles
Brainstem: Midbrain (mesencephalon) | Smallest and most primitive part of the brain; located at the top of the brainstem, below the thalamus | Connects the cerebral hemispheres with the spinal cord; acts as visual and auditory reflex center; righting reflex located here
Brainstem: Pons | Between cerebrum and medulla | Carries messages between cerebrum and medulla; acts as respiratory center to produce normal breathing patterns
Brainstem: Medulla (oblongata) | Located at the floor of the skull below the midbrain; connects the brain to the spinal cord | Vital for life; descending nerve tracts from the brain cross here to the opposite side; contains centers for many body functions (cardiac, vasomotor, and respiratory centers; swallowing, coughing, and sneezing reflexes)
Main areas of the brain [71] ch2.1
[72]
[71] ch2.1
[71] ch2.1
Cerebrum: the cerebrum makes up 80% of the brain's volume. It coordinates sensory data and motor functioning and
governs intelligence, reasoning, learning, memory, and other complex behaviors. The cerebrum is divided into
two layers, four lobes and two halves (hemispheres).
The cerebral lobes are: frontal, parietal, temporal, and occipital.
The frontal lobe is larger in humans than in all other animals. It controls the areas for written and motor
speech, is responsible for high-level mental functioning, including conception, judgment, planning, speech and
communication, and is also involved in motor functions.
The parietal lobe contains the sensory area. It interprets somatosensory input such as touch, taste, pressure,
temperature and pain, and supports spatial ability.
The temporal lobe receives and interprets auditory signals, smell signals, and processes language.
The occipital lobe contains the visual areas. It is responsible for visual experience.
The cerebellum is located at the back of the brain. It accounts for approximately 10% of the brain’s volume,
and contains over 50% of the total number of neurons in the brain. The cerebellum is involved in the following
functions: maintenance of balance and posture, coordination of movements, motoric learning, sensorimotor
cognition and speech.
The brainstem connects the cerebral hemispheres with the spinal cord. It includes the midbrain, pons, and
medulla .
The midbrain is located at the very top of the brainstem and functions as an important reflex center. Visual and
auditory reflexes are integrated here.
The pons contains nerve tracts that carry messages between the cerebrum and medulla, and has respiratory
centers that control the breathing pattern.
The medulla lies just below the pons and is continuous with the spinal cord. The medulla contains centers for
vital body functions, including the cardiac center (heart rate), vasomotor center (blood vessels and blood
pressure), and respiratory center. The medulla controls innate reflexes, such as swallowing, coughing, sneezing,
hiccupping, and vomiting.
The interbrain (diencephalon) consists of the thalamus and the hypothalamus. The thalamus is the central relay station
for sensory input; it influences mood and body movements and generates emotions. The hypothalamus
regulates homeostasis: temperature regulation, hunger, peristalsis, water balance, sexual response, sleep-wake
cycle, heart rate, gland secretions, swallowing, shivering, and urine release.
The limbic system consists of the hippocampus, the reticular formation, and the amygdala. The hippocampus is
responsible for learning and long-term memory; the reticular formation is responsible for wakefulness and sleep. The left and right
amygdala generate emotions, attach them to associations and responses, and store them as emotional memories.
The spinal cord has two main functions: to conduct impulses to and from the brain and to act as a reflex
center.
Somatosensory brain areas and processes [71] ch2.5
The somatosensory system controls the tactile cognition.
Somatosensory cortical areas
Somatosensory signaling from the mechanoreceptors to the cortex
The somatosensory signals originate in the mechanoreceptors and propagate through the brainstem and
thalamus to the posterior and parietal cortex and are memorized in the hippocampus and amygdala.
Chemo-sensory brain areas and processes [71] ch2.9
The chemo-sensory system controls the gustatory (taste) and olfactory (smell) sensorial processing.
Neural pathway for taste (gustatory sense)
Olfactory signaling from receptors to the cortex [72]
The smell signals originate in the microvilli, propagate to the hippocampus, the hypothalamus and amygdala,
and finally to the orbital-frontal cortex.
Auditory system [71] ch2.13
Neural pathway of the auditory system from the cochlea to the auditory cortex
The auditory sequences originate as frequency-distributed signals in the cochlea, and propagate to the superior
temporal (auditory) cortex, and are stored as frequency-amplitude images.
Visual system [71] ch2.15
Visual neural pathway to the visual cortex [72]
The visual images are transmitted from the retina to the lateral geniculate nucleus and from there to the
primary visual cortex.
From the visual cortex the visual image is transferred across the dorsal and the ventral pathway for further
processing.
[72]
The dorsal pathway (green) and ventral pathway (purple) originate from primary visual cortex.
V1 transmits information to two primary pathways, called the ventral stream and the dorsal stream.
The ventral stream begins in the primary visual cortex (V1) and goes to the inferior temporal cortex (IT cortex).
The ventral stream is associated with form recognition and object representation, and with storage of long-term
memory.
The dorsal stream begins with V1, goes through Visual area V2, then to the dorsomedial area (DM/V6) and
medial temporal area (MT/V5) and to the posterior parietal cortex.
The dorsal stream is associated with motion, representation of object locations, and control of the eyes and
arms.
3 Brain states and brain measurement
Studying the mind [JH], [4] ch1
Knowledge about the mind can be gained by self-inspection, studying human behavior, physical brain
examination, and the study of the mind in animals and human children.
-Self-inspection
Self-inspection means observing one's own mind functionality as recorded by the autobiographical self. Here one has
to take into account the filtering by the higher-level mind instances in order to assess the functionality on the
protoself and core-self level.
-Studying human behavior
Studying human behavior means applying our innate capability of "emulating" the behavior of others in ourselves,
commonly known as "human insight". The study of mind functionality after brain lesions also falls into this
category.
-Physical brain examination
Physical brain examination uses modern physical measurement techniques applied to the brain: imaging of
the brain in action (MRI, PET, electroencephalography EEG, magnetoencephalography MEG), and direct electrical
neuron stimulation and reading.
-Study of mind functionality in animals and human children
Notable here are the remarkable research results on primate language learning [32, 33], also in comparison
with human children [33].
Brain measurement and imaging [JH] , [60]8, [60]9
There are basically two methods of brain measurement: brain-wave measurement with electroencephalography
(EEG) or magnetoencephalography (MEG), and brain imaging of metabolism by fMRI and PET.
Brain EEG/MEG measurement
EEG measures voltage with the electrodes placed along the scalp. Electrocorticography, involving invasive
electrodes, is sometimes called intracranial EEG. EEG measures voltage fluctuations resulting from ionic (Na+
K+) current within the neurons of the brain. EEG measures event-related potentials (ERPs) by averaging EEG
responses that are time-locked to stimuli. A typical adult human EEG signal is about 10 µV to 100 µV in
amplitude when measured from the scalp.
Magnetoencephalography (MEG) maps brain activity by recording magnetic fields produced by electrical
currents in the brain, using very sensitive magnetometers, normally arrays of (helium-cooled) SQUIDs
(superconducting Josephson-junctions) , or using spin exchange relaxation-free (SERF) magnetometers
(without helium-cooling). MEG can resolve events with a precision of 10 milliseconds or faster. In order to
achieve in-depth source localization, MEG is combined with fMRI-BOLD.
Brain waves
The most frequent spectral components in EEG/MEG are [112]
• Gamma rhythm (30–100 Hz) is widely accepted to represent the binding of different populations of neurons to
perform a certain function.
• Beta rhythm (12–30 Hz) is associated with active attention and focus on the exterior world. Beta is also
present during states of tension, anxiety, fear and alarm.
• Alpha rhythm (8–12 Hz) is the basic rhythm amplified by closing the eyes and by relaxation. Consciousness is
alert but unfocused, or focused on the interior world.
• Theta rhythm (4–8 Hz) is usually associated with drowsy, near-unconscious states, such as the threshold
period just before waking or sleeping. Awake theta is associated with relaxed, meditative, and creative states.
• Delta rhythm (0.5-3Hz) is associated with deep sleep or unconsciousness.
• Slow cortical potentials (SCP) represent activation of a group of neurons approximately every 10 s. Note that
traditional EEG amplifiers discard all rhythms slower than 0.5 or 1.5 Hz.
• SMR sensorimotor rhythm wave (13-15Hz) is an idle rhythm of synchronized electric brain activity, which
appears over the sensorimotor cortex.
Brain rate [60] ch8
Brain rate fb is an effective summary measure of the overall activation of the brain.
Brain rate is defined as the mean frequency of brain oscillations weighted over all bands of the EEG potential
(or power) spectrum; the brain rate fb is calculated as

fb = Σi fi · Vi / (Σj Vj)

where the index i denotes the frequency band (for delta i=1, for theta i=2, etc.), fi is the mean frequency of band i,
and Vi is the corresponding mean amplitude of the electric potential (or power).
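For illustration, the following short sketch computes the brain rate from the definition above; the band center frequencies follow the band edges listed earlier in this chapter, and the example amplitudes are invented.

```python
# Sketch of the brain-rate computation: mean band frequency weighted by the relative
# mean amplitude V_i of each EEG band (example amplitudes are made up).
band_center_hz = {"delta": 1.75, "theta": 6.0, "alpha": 10.0, "beta": 21.0, "gamma": 65.0}
mean_amplitude_uV = {"delta": 20.0, "theta": 15.0, "alpha": 30.0, "beta": 10.0, "gamma": 5.0}

total_V = sum(mean_amplitude_uV.values())
f_b = sum(band_center_hz[b] * mean_amplitude_uV[b] / total_V for b in band_center_hz)
print(f"brain rate f_b = {f_b:.2f} Hz")
```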
Brain rates of negative emotional states with eyes open (EO), and eyes closed (EC)
Brain rates of sleep states based on EEG and heart rate variability (HRV)
Brain rates of normal brain states
Brain imaging [JH]
-fMRI
Functional MRI (fMRI) measures brain activity via magnetic resonance by detecting changes associated with
blood flow. This technique relies on the fact that cerebral blood flow and neuronal activation are coupled; it
uses the blood-oxygen-level-dependent (BOLD) contrast in magnetic resonance imaging.
The time resolution is ~1 s, the spatial resolution ~1 mm.
-PET
Positron emission tomography (PET) is an imaging technique that uses gamma-emitting substances
(radiotracers) to visualize and measure physiological processes like blood flow and local chemical
composition. Gamma rays are detected by gamma cameras in a PET-scanner to form a three-dimensional
image.
PET imaging with oxygen-15 indirectly measures blood flow to the brain. PET imaging with 18F-
fluorodeoxyglucose measures the local glucose level in the brain.
States of consciousness
The human brain can be described mathematically as a state machine, where a state is characterized by a range
of electrical (voltage frequency spectrum) and physiological (blood flow, neurotransmitter concentrations)
parameters.
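As a purely illustrative sketch of this state-machine picture, the snippet below classifies a measurement into a named state from parameter ranges; the state names, EEG ranges and metabolism ranges are invented placeholders, only loosely inspired by the state descriptions later in this chapter.

```python
# Minimal sketch of the "brain as state machine" picture: each state is a set of
# parameter ranges (dominant EEG frequency, relative metabolism), and a measurement
# is classified into a state. All numbers are illustrative placeholders.
STATES = {
    "slow_wave_sleep": {"eeg_hz": (0.5, 3.0),  "metabolism": (0.4, 0.7)},
    "rem_sleep":       {"eeg_hz": (3.0, 10.0), "metabolism": (0.7, 1.0)},
    "awake_alert":     {"eeg_hz": (15.0, 30.0), "metabolism": (0.9, 1.1)},
}

def classify(eeg_hz: float, metabolism: float) -> str:
    for name, ranges in STATES.items():
        lo_f, hi_f = ranges["eeg_hz"]
        lo_m, hi_m = ranges["metabolism"]
        if lo_f <= eeg_hz <= hi_f and lo_m <= metabolism <= hi_m:
            return name
    return "unclassified"

print(classify(eeg_hz=20.0, metabolism=1.0))   # -> "awake_alert"
```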
Illustration of the two major components of consciousness: the level of consciousness (arousal or wakefulness)
and the content of consciousness (awareness) in normal physiological states, where the level and the content of
consciousness are generally positively correlated, and in pathological states or pharmacological coma [60] ch2.
Disorders of Consciousness [60] ch2
-Brain death
During brain death, there is irreversible loss of all reflexes of the brainstem, absence of electrical brain activity
in the electroencephalogram (EEG), and absence of cerebral blood flow.
Functionality: no electrical brain activity
EEG/MEG: no effect
Metabolism: no cerebral blood flow
-Coma
Coma is a state of non-responsiveness in which the patients lie with eyes closed and cannot be awakened even
when intensively stimulated (Plum and Posner 1983).
Functionality: eyes closed, no wakefulness, no sleep-wake cycles, irregular breathing, present reflexes: eye
reflexes, ear reflexes, pharyngeal reflex;
injury to the cortex or the reticular formation (RF) or both.
EEG/MEG: constant unresponsive alpha wave (8-13 Hz), depressed reflexes to pain stimulation, auditory ERPs
present, spindle (0.5 s, 11-15 Hz) and triphasic (negative-positive-negative high-amplitude pulse (~70 μV))
patterns
Metabolism: 30-50%
-Vegetative state
The vegetative state (VS), or unresponsive wakefulness syndrome (Laureys et al 2010 [113]), is defined by eyes
opening either spontaneously or after stimulation and lack of awareness.
Functionality: eyes opening/closing, sleep-wake cycles present, autonomous functions present, breathing
normal, no verbalization, no awareness, reflexive movements: blinking, swallowing, smiling
EEG/MEG: slow theta waves (4-6Hz)
Metabolism: 50-60%, pain and voice stimulation responsive , active: brainstem, thalamus, primary auditory
cortex, non-active: high-level associative cortex
-Minimally conscious state
MCS is a pre-conscious state above the vegetative state.
Patients are unable to communicate functionally, they can sometimes respond adequately to verbal commands
and make understandable verbalizations, emotional behaviors, such as smiles, laughter or tears are observed.
Functionality: partly verbalization, recognition of language, autonomous functions present, eye tracking,
emotional behavior
EEG/MEG: low amplitude delta (0.5-3 Hz) theta (4-8 Hz) waves
Metabolism: 60-80%, active: cortex auditory network, medial prefrontal cortex (mPFC), posterior cingulate
cortex, occipital cortex, responsive pain stimulus: sensory cortex, posterior parietal cortex, pre-motor cortex,
superior temporal cortex
Global metabolism in various states [60] ch2
Ordinary states of consciousness [60] ch3
-Normal awareness
Stillness: SMR waves (12-15 Hz), brain rate fb=6.8
Alertness: Beta2 (15-20Hz), brain rate fb=7.8
Anxiety: Beta3 (22-25 Hz), brain rate fb=8.56
Peak performance: Gamma (>35Hz), brain rate fb=9.4
- Daydreaming
Daydreaming is a subjective experience that usually occurs under conditions of low external stimulation, where
unsolicited thoughts intrude during mental and/or physical activities. There are 4 types, differentiated by
functionality and EEG (Lehmann et al. 1995).
Type1
Functionality: non-recalled visual images without emotions
EEG/MEG: 2-6Hz, 13-15Hz
Type2
Functionality: recalled visual images without emotions
EEG/MEG: inversed (=low amplitude) 10-13 Hz, 15-25 Hz
Type3
Functionality: goal-oriented concatenated thoughts about present and future events without emotions
EEG/MEG: 10-11 Hz, 19-30 Hz
Type4
Functionality: goal-oriented concatenated thoughts about present and future events with emotions
EEG/MEG: 10-13 Hz, 15-25 Hz
-Drowsiness
During drowsiness, the phenomena observed include amnesic and/or automatic behavior, and brief periods of
sleep onset (micro-sleep).
Functionality: amnesic and automatic behavior, reduced latency/amplitude of event related potentials (P300)
EEG/MEG: theta waves (4-8Hz) , brain rate fb=4.0
-REM sleep
Rapid eye movement sleep (REM sleep) is a phase of sleep in mammals and birds (in humans also
S1 sleep ), characterized by rapid movement of the eyes and by low muscle tone, accompanied by dreams.
During REM sleep, the brain processes and stores information from the previous day.
Functionality: rapid eye movement, hallucinatory mental content (dream), reduced: abstract thinking, decision
making, consistent logic; no focused attention, no semantic and episodic memory, deactivated dorsolateral
prefrontal lobe
EEG/MEG: PGO waves in brainstem, theta (3-10Hz) waves in hippocampus, gamma (40-60 Hz) in cortex,
brain rate fb=5.34
-NREM sleep
Deep sleep in stages S2, S3, S4 is called non-rapid eye movement sleep (NREM), as there is little or no eye
movement, also there is no muscle paralysis (normal muscle tone).
Functionality: no eye movement, normal muscle tone, mental content thought-like non-visual, parasympathetic
NS dominates, S2: sleep-talking and sleep-walking possible (mostly in children)
EEG/MEG: S2-sleep: delta waves (0.5-3 Hz), spindles (0.5s 11-15 Hz) , active: sensorimotor cortex, superior
frontal gyrus; brain rate fb=4.18
S3-sleep: delta waves (0.5-3 Hz), brain rate fb=2.72
S4-sleep: delta waves (0.5-3 Hz), brain rate fb=2.45
-Relaxation
Meditative practices are commonly used techniques to induce relaxation.
Functionality: reduced vivid imagery, slowed-down respiration, reduced brain metabolism
EEG/MEG: beta2-waves (18.5–21.0 Hz) , brain rate fb=5.8
4 Maps, emotions and memory
Biological value and homeostasis [4] ch2
Biological value is a measure of well-being of an animal/human. It evaluates the deviation of life-parameters
from the optimal range (physical parameters like temperature, pressure, concentrations of body liquids,
hormone levels, concentration of metabolic reactants) and achievement of goals (sexual reproduction, food
supply, security).
An animal/human is rewarded or punished accordingly by special hormones and feels pleasure or pain.
Preconditions for measuring biological value are interoception (cognition of inner state parameters via hormones
and neurotransmitters) and exteroception (sensory cognition).
Measuring homeostasis functions works by
-cognition of current inner state
-(pre-defined) knowledge of desirable state
-comparison of the two
Unconscious regulation of homeostasis works via automatic chemical feedback loops (see the sketch below).
Conscious regulation of homeostasis works via a goal mechanism guided by instincts.
Regulation of social homeostasis is achieved on a basic level via emotions, and on an advanced level via morals,
behavioral rules, and justice.
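The following toy sketch illustrates such an automatic feedback loop: compare the current inner state with the desired set-point and correct the deviation. The parameter (body temperature), set-point, and gain are invented for illustration and do not model an actual hormonal pathway.

```python
# Toy sketch of unconscious homeostasis regulation as an automatic feedback loop.
def regulate(current: float, set_point: float, gain: float = 0.3) -> float:
    """One feedback step: move the life-parameter back toward its optimal value."""
    error = set_point - current          # comparison of current and desirable state
    return current + gain * error        # corrective (e.g. hormonal) response

temperature = 39.5                       # deviation from the optimal range
for _ in range(10):
    temperature = regulate(temperature, set_point=37.0)
print(round(temperature, 2))             # converges toward 37.0
```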
Maps as representations [4] ch3
Maps are representations of internal states and external objects and events made by the brain.
Raw sensory data are stored in dedicated regions of the mid-brain: visual images in the superior colliculus,
auditory images in the inferior colliculus, whole-body pain/satiety information in the parabrachial nucleus, whole-
body chemo-sensory and tactile information in the nucleus tractus solitarius; corresponding primary feelings are
processed in the periaqueductal gray (PAG). The PAG also controls some primary behavior patterns (defensive,
copulatory, maternal).
The deep superior colliculus combines visual, auditory and whole-body sensory information into a primitive
map, which can guide automatic half-conscious behavior via PAG .
High-level processing takes place in the cortex: visual resp. auditory images are processed in the visual cortex
resp. auditory cortex, primary feelings are processed in the right and left insular cortex and somatosensory
cortex.
Recalling objects of a map proceeds via synchronously firing brain regions (40Hz range) , and uses recursive
processing.
Varieties of maps (images) and their source objects. When maps are experienced, they become images. A
normal mind includes images of all three varieties described above. Images of an organism’s internal state
constitute primordial feelings. Images of other aspects of the organism combined with those of the internal state
constitute specific body feelings. Feelings of emotions are variations on complex body feelings caused by and
referred to a specific object. Images of the external world are normally accompanied by images of varieties of I
and II. Feelings are a variety of image, made special by their unique relation to the body. Feelings are
spontaneously felt images. All other images are felt because they are accompanied by the particular images we
call feelings.
Representation of external and internal world [JH]
Based on the achievements of AI chatbot Lamda from Google, which reaches the mental ability of an 8-year-
old child [81], we can postulate a similar structure and learning algorithm in human children.
Lamda (Language Model for Dialogue Applications) is a parallel-input transformer recursive neural net
(NN), pre-trained on a selected, qualified text data base with 2.81T SentencePiece tokens, then fine-tuned
on a highly qualified text data base, using evaluation based on the objectives quality, safety, and
groundedness [82].
Lamda communicates in (spoken or texted) natural-language in real time.
Lamda has, according to software-developer Lemoine, the intelligence of an 8-year-old child [81].
Lamda can be extended by a picture-processing machine in addition to the text-processing machine, which
would enable it to form a symbolic-visual representation of the world similar to humans.
Lamda can be equipped with an additional fast-retrieving CPU-based knowledge system, which gives it
practically the complete knowledge of humanity.
Lamda can be made faster by an estimated factor of 1000-10000 by implementing it on NN-hardware.
Schrimpf et al. [90] analyzed 43 different neural net models to see how well they predicted measurements of
human neural activity measured by fMRI and electrocorticography. Transformer recursive NN’s like Lamda
performed best among them, predicting almost all of the patterns found in the imaging.
This supports the view that humans form their representation of the external and internal (psychic) world by
supervised and unsupervised recursive-NN learning by verbal dialogue and sensorial experience in analogy to
Lamda learning from verbal dialogue and verbal input.
Human representation in comparison with Lamda is twofold:
-basic sensorial notions of external world, e.g. fly=image + sound + time-sequential action (=behavior) of a fly
-verbal-based notions of external world from dialogues/texts, e.g. flies move by flying or walking, flies feed on
human food and on waste
- basic intuitive notions of internal world, e.g. feeling of fear
- verbal-based notions of internal world, e.g. from others speaking about their fear, or reading texts about the
emotion of fear and the corresponding human behavior
Learning in humans and in neural networks [JH]
-Learning modes
Modern neural networks (NN’s) like Lamda [81] are transformer neural networks (TNN). They learn in
supervised mode or in unsupervised mode [83].
In supervised mode, there is a correct output corresponding to a given input, and the NN adapts its weights to
the correct output (e.g. via backward propagation). The corresponding model is the Feedforward NN with
backpropagation [73]. This is the basic cognition mechanism in higher animals; in humans it is probably the
main sensorial processing mechanism in the visual, auditory and motor cortex.
In unsupervised mode, the learning and responding mechanism is probably very similar to the corresponding
functionality in TNN’s [83] [73]. Also, the structure and storage of verbal-symbolic knowledge must be very
similar to the corresponding structure in TNN’s [73].
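For the supervised mode described above, the following minimal sketch shows a feedforward net trained with backpropagation on a toy XOR task; the architecture, learning rate and iteration count are arbitrary illustrative choices, not taken from the cited models.

```python
# Minimal sketch of supervised learning: a feedforward net adapting its weights to the
# correct output via backpropagation (toy XOR task, NumPy only).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backward pass: output error
    d_h = (d_out @ W2.T) * h * (1 - h)   # hidden-layer error
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

# after training, the outputs should be close to [0, 1, 1, 0]
# (exact values depend on the random initialization)
print(np.round(out.ravel(), 2))
```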
-General speech processing in NN’s as a model for the brain [73]
The general scheme of supervised and unsupervised learning and responding based on NN functionality is
presented in the following schematic [90].
Comparing NN models of language processing to human language processing [90]
-Speech processing in TNN’s as a model for the brain
The functionality of learning and translation in TNN’s is presented in the following schematic [73] ch13.
Mono-lingual speech generation responding to a speech input runs in the same way, but with
Dictionary2=Dictionary1 .
The basic new feature here is the attention vector Q, which is the similarity weight vector of a word xi to a set
of symbols qj . An additional parameter here is the relation (K,V) , which represents a relation within the
processed phrase (e.g. k=subject, v=verb). For instance, in the phrase “cat sits on the carpet”, the relation for
“cat” is ( k=subject(cat), v=verb(sits)) , and the symbol with the highest weight would be “animal subject”.
The dictionary contains the learned weight vectors q(k,v), i.e. weights of a word vs. symbols in context.
The semantic meaning of a word is represented by the set of its attention vectors in all possible contexts.
In the learning phase, the encoder generates the attention vectors q(k,v), and stores them in the dictionary.
In response generation, the encoder generates attention output from the input and the dictionary, and the
decoder generates the response output word by word, using the encoder output, the dictionary, and the up-to-
now generated response phrase. During response generation, the decoder generates attention output for every
generated response word, and modifies the dictionary accordingly. That means that TNN learns also in
response generation.
For translation, Dictionary1 is learned in the source language, and Dictionary2 is learned independently in the
target language. During the translation process, the decoder generates the target output using Dictionary2 as the
response to the source input of the encoder, which uses Dictionary1.
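As a minimal sketch of the attention mechanism discussed above, the following code shows generic scaled dot-product attention: each word's query Q is compared with the keys K, and the resulting similarity weights mix the values V. This is a generic textbook formulation, not the specific implementation of the cited models; the dimensions and random vectors are placeholders.

```python
# Generic scaled dot-product attention: similarity weights of queries vs. keys,
# used to form a weighted mixture of the values.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])                                 # query-key similarity
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                                                      # weighted mixture of values

rng = np.random.default_rng(1)
n_words, d = 5, 8                              # e.g. the 5 words of "cat sits on the carpet"
Q = rng.normal(size=(n_words, d))
K = rng.normal(size=(n_words, d))
V = rng.normal(size=(n_words, d))
print(attention(Q, K, V).shape)                # (5, 8): one context vector per word
```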
-Two-pathway object and sequence learning
In the human and primate brain, visual processing has two pathways: ventral visual stream, which is responsible
for recognizing objects and faces, and the dorsal visual stream, which processes movement [83].
Experiments with NN’s with single image recognition and next image prediction in a sequence showed that
splitting a single NN in two NN’s results in two specialized pathways: one for single image recognition, and
one for sequence prediction [83].
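Purely as a conceptual sketch of this two-pathway idea (not the setup of the cited experiment [83]), a single input can feed two specialized branches, a "ventral" branch for object recognition and a "dorsal" branch for next-frame prediction; the class name and dimensions below are invented.

```python
# Conceptual sketch: one shared input split into two specialized pathways.
import torch
import torch.nn as nn

class TwoPathwayNet(nn.Module):
    def __init__(self, in_dim=64, n_classes=10):
        super().__init__()
        self.ventral = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, n_classes))   # "what": object identity
        self.dorsal = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                    nn.Linear(32, in_dim))        # "where/when": next frame

    def forward(self, frame):
        return self.ventral(frame), self.dorsal(frame)

net = TwoPathwayNet()
frame = torch.randn(1, 64)
object_logits, predicted_next_frame = net(frame)
print(object_logits.shape, predicted_next_frame.shape)
```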
Filtering and transport of information [91]
Sensorial information first enters the rhinal cortex (rC), where it is meaning-analyzed for ca. 1 s and associated
with verbal, visual or other sensorial information in memory. If recognized as relevant, it is copied into short-
term memory in the hippocampus by synchronous gamma-oscillations. The short-term memory can store
information for up to several hours.
Information evaluated as highly relevant is copied from the hippocampus into the long-term memory in the
cortex. The long-term memory has two areas: declarative memory, containing episodes, facts, and numbers, and
procedural memory, which encompasses automatized behavioral patterns like walking or cycling.
In the hippocampus only clues remain, which can be used to recall the complete contents from the cortex.
The long-term memory works with strengthening and weakening of synapse contacts by protein bridges.
The fixing and erasing of memories in the long-term memory usually happens at night, during the REM phase.
Strengthening of synapse contacts depends on the frequency of activation.
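The following toy sketch only illustrates the filtering/consolidation pipeline just described: items are relevance-checked (the "rhinal cortex" step), relevant ones go to a short-term store (the "hippocampus"), and highly relevant ones are consolidated into a long-term store (the "cortex"). All thresholds, names and examples are invented.

```python
# Toy sketch of the filtering and consolidation pipeline (illustrative only).
short_term, long_term = [], []

def process(item: str, relevance: float):
    if relevance > 0.3:                 # rhinal cortex: relevance filter (~1 s analysis)
        short_term.append((item, relevance))

def consolidate():                      # happens "at night": move highly relevant items
    for item, relevance in short_term:
        if relevance > 0.7:
            long_term.append(item)      # consolidated into cortical long-term memory
    short_term.clear()

process("smell of coffee", 0.2)
process("phone number of a friend", 0.9)
process("color of a passing car", 0.5)
consolidate()
print(long_term)                        # ['phone number of a friend']
```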
Wave patterns in memory storing, consolidation and recalling
There are two wave forms in the hippocampus, which are associated with memory storage and memory
consolidation: long wave ripples arising during learning (e.g. in a maze), and sharp wave ripples related to
memory consolidation [124].
The sharp wave ripples have been associated with memory consolidation of newly gained knowledge for long-
term storage, but also with recalling long-term memories [125].
Sharp wave ripples in electrical activity observed in the hippocampus can take short and long forms [124]
Body-to-mind mapping [4] ch4
The sensory information from the body: skeleto-muscular and visceral-muscular sensors, smell and taste
mucosae, the tactile elements of the skin, the ears, the eyes and the chemo-sensory information, are pre-
processed in mid-brain and stored in the cortex. The cortex generates conscious (secondary) feelings (sensory
body images), but it can also simulate body states.
The brain-to-body signals are motoric (skeleto-muscular and visceral-muscular) and chemical (via hormones,
like cortisol, testosterone, and estrogen).
Schematics of key brain-stem nuclei involved in life regulation (homeostasis). Three brain-stem levels are
marked in descending order (midbrain, pons, and medulla); the hypothalamus (which is a functional component
of the brain stem even if it is, anatomically, a part of the diencephalon) is also included. Signaling to and from
the body proper and to and from the cerebral cortex is indicated by vertical arrows. Only the basic
interconnections are depicted, and only the main nuclei involved in homeostasis are included. The classic
reticular nuclei are not included, nor are the monoaminergic and cholinergic nuclei.
The involved structures such as the NTS (nucleus tractus solitarius) and PBN (parabrachial nucleus) transmit
signals from body to brain, but not passively. These nuclei, whose topographic organization is a precursor of
that of the cerebral cortex, respond to body signals, thereby regulating metabolism and guarding the integrity of
body tissues. Moreover, their rich, recursive interactions (signified by mutual arrows) suggest that in the
process of regulating life, new patterns of signals can be created. The PAG (periaqueductal gray), a generator of
complex chemical and motor responses aimed at the body (such as the responses involved in reacting to pain
and in the emotions), is also recursively connected to the PBN and the NTS. The PAG is a pivotal link in the
body-to-brain resonant loop. The deep superior colliculus (SC) combines visual, auditory and whole-body
sensory information, which can guide automatic half-conscious behavior via the PAG. The area postrema (AP), a
paired structure in the medulla oblongata of the brainstem, is a circumventricular organ having permeable
capillaries and sensory neurons that enable its dual role of detecting circulating chemical messengers in the blood
and transducing them into neural signals and networks.
In the process of regulating life the networks formed by these nuclei also give rise to composite neural states.
The word feelings describes the mental aspect of those states [4] ch4.
The brain simulates body states via the so-called as-if-body-loop, which is realized via mirror-neurons. In this
way, the brain performs, in its body maps, the simulation of a body state that is not actually taking place
in the organism.
Emotions and feelings [4] ch5
Emotions are complex, largely automated programs of actions. The actions are complemented by a cognitive
program that includes certain ideas and modes of cognition, but the world of emotions is largely one of actions
carried out in our bodies, from facial expressions and postures to changes in viscera and internal milieu.
Feelings of emotion are composite perceptions of what happens in our body and mind; the world of feelings is
one of perceptions executed in brain maps.
Emotions are triggered by images of objects or events that are actually happening.
There are three ways of generating an emotion
-perception of a change of body state caused by the emotion
-perception of the corresponding as-if-body-loop, i.e. the simulation of the emotion
-feeling an altered transmission of body signals to the brain, e.g. by alcohol
The time interval between the stimulus of an emotion and its feeling is about 0.5s, according to magneto-
encephalography measurement by Rudrauf.
The primary, evolutionarily old emotions are fear, anger, desire, sadness, pain, pleasure, love, disgust, and joy.
The secondary socially motivated emotions are compassion, embarrassment, shame, guilt, contempt, jealousy,
envy, pride, admiration.
Memory architecture [4] ch6
Mental maps are stored as configuration (synaptic weights) of brain neural networks (map memory), and
recalled by dispositions (action recipes) .
Memory of an object/event is a composite of sensory, motoric and mental activities:
-audio-visual image (shape, movement, color, sound)
-sensorimotor pattern (e.g. eye movement)
-tactile pattern
-previous patterns pertinent to the object
-pattern of triggering emotions and feelings
The evolutionary predecessor of this map memory is the disposition memory. A disposition is an action recipe
invoked by a direct external effect (like a direct hit on the body causing the action “move in opposite direction
for a fixed time”). The disposition memory in humans and higher animals is used to manage basic life
functions: the endocrine system, reward/punishment, triggering and execution of emotions.
Memories are distributed; damage to the anterior brain regions compromises the specificity of a memory, but
not its general contents, e.g. a patient with anterior brain damage may describe a birthday party in detail, and
yet forget that it was his own birthday party.
There is a degree of complexity corresponding to a memory: unique-personal entities and events are the most
complex, then come the less complex unique-nonpersonal, then nonunique-nonpersonal ones.
Furthermore, there is a distinction between factual (static) and procedural (in time) memories.
In human/animal brain, dispositions are stored in neural networks to recall map memories.
Memory is organized in multi-level micronodes (convergence-divergence zones) and convergence-divergence
regions functioning as central hubs.
A convergence-divergence zone (CDZ) is a microscopic ensemble of neurons which form a feedforward neural
network. CDZs recreate approximately synchronous perceptions within a certain time-window; they are, roughly
speaking, bundled network-representations of important concepts, e.g. 'bear' (and other similar concepts,
like 'elephant').
CD regions (CDRs) are macroscopic macronodes which function as network hubs. CDZs number in the many thousands,
CDRs in the dozens. CDRs are agents which compete for attention focus and cognitive focus.
Using the CD architecture to recall memories prompted by a specific visual stimulus. In panels a and b, a
certain incoming visual stimulus (selective set of small filled-in boxes) prompts forward activity in CDZs of
levels 1 and 2 (bold arrows and filled-in boxes). In panel c, forward activity activates specific CDRs, and in
panel d, retroactivation from CDRs prompts activity in early somatosensory, auditory, motor, and other visual
cortices (bold arrows, filled-in boxes). Retroactivation generates displays in “image space” as well as
movement (selective set of small filled-in boxes) [4] ch6.
The image space (mapped) and the dispositional space (nonmapped) in the cerebral cortex. The image space is
depicted in the shaded areas of the four A panels, along with the primary motor cortex. The dispositional space
is depicted in the four B panels, again marked by shading [4] ch6.
Modal conceptual representation
According to newer research [43], aspects of complex concepts are stored in corresponding brain regions and
interconnected, and the whole concept is invoked by activation of one aspect (e.g. the concept "telephone" by its
sound in the auditory cortex). An aspect can be sensory, motoric or purely abstract (amodal).
The anterior temporal lobe (ATL) is the main hub for amodal concept processing.
There is evidence for the embodied-abstraction view, i.e. multimodal conceptual processing in the left posterior
inferior parietal lobe (pIPL), the posterior middle temporal gyrus (pMTG), and the medial prefrontal cortex (mPFC).
Memory functionality [61],[62],[63],[64] [73]
Long-term memory | Subtype | Description | Example
Declarative (conscious memory of facts and events) | Semantic | Factual information | the American independence war started in 1776
Declarative (conscious memory of facts and events) | Episodic | Individual experiences | Sunday afternoon with parents on the beach
Procedural (recognition skills) | Pattern memory | Pattern recognition and completion | completing a letter which is half-visible
Procedural (recognition skills) | Perceptual memory | Differentiating sensorial experience by learning from stimuli | distinguishing musical tones
Procedural (recognition skills) | Categorical memory | Automatic grouping in categories | kinds of fruits
Procedural (recognition skills) | Emotional memory | Forming event-conditioned emotional reactions | being afraid of dogs
Procedural (recognition skills) | Procedural memory | Learning motoric-sensorial skills | learning swimming
Memory can be subdivided into declarative (conscious) and non-declarative/procedural (sub-conscious)
memory.
Declarative memory is semantic (factual) or episodic (individual events).
The autobiographical self is based on semantic and episodic memory.
Non-declarative memory consists of automatically learned motoric, sensorial, and pattern-recognition skills. It
consists of pattern, perceptual, categorical, emotional, and procedural memory.
Semantic memory
Semantic memory is the general world knowledge accumulated throughout life. This general knowledge (facts,
ideas, meaning, physical concepts, fictional concepts from literature) is acquired from experience and
dependent on culture.
Semantic memory is located in the para-hippocampal cortex, i.e. the entorhinal and perirhinal cortex [61],[64].
During semantic retrieval, two regions in the right middle frontal gyrus and the right inferior temporal gyrus
also show increased activity.
Episodic memory
Episodic memory is the memory of individual events (times, geographic locations, associated emotions)
[61],[64].
Episodic memory is located in the hippocampus; during episode retrieval and formation, the right prefrontal
cortex is also activated.
Memory storage in structural network patterns
As was shown by Quian-Quiroga 2013 [119], symbols in semantic memory are stored in single neurons
(the famous "Luke-Skywalker neuron"). The symbol constitutes the highest level of a network of lower-level
symbols; for the person "Luke Skywalker", lower-level concepts are: images of Luke Skywalker, the sound and
textual representation of the name, and scenes from Star-Wars movies where he is involved. Also, there are
connections to related symbols like Darth Vader, etc.
Furthermore , Quian-Quiroga et al. showed in 2015 [120] that the same is true for events in episodic memory.
This means that concepts in semantic memory and events in episodic memory are both stored as hierarchical
networks. Evidently, concepts in brain memory are defined by their structural network patterns, and not by a word in the mother tongue. The structural pattern characterizes a symbol, e.g. “knife”, and this pattern should be similar in similar cultural environments, and largely language-independent. This explains why it is relatively easy to create a one-to-one dictionary between different languages, apart from nuances, as long as their speakers share a similar cultural environment. The same is true for trans-cultural concepts like “distance”.
This concept is supported by the knowledge structure in transformer neural networks (TNN’s), see above.
In TNNs, symbols are language-independent and characterized by their network structure (weighting of neighbors: first-degree neighbors, second-degree neighbors, ...), whereas the words of a language are characterized by their attention vectors relative to symbols [73].
In general, words of a language describing a concept are only a label of the corresponding structural network
pattern, it is this structural pattern that carries the semantic meaning of the concept.
This concept is the basis for machine translation (MT) using neural machine models, e.g. for unrelated
languages like English and Chinese [121].
Traditional MT uses rule-based systems with grammar translation rules and dictionary-like word-mapping and
short-phrase-mapping.
Neural machine translation (NMT), based on transformer networks (especially GPT), uses long continuous sequences and trains the model using the attention-vector technique.
When training two neural networks (NN) on English and Chinese, each NN forms a word network: a map of all the associations of words in the respective language, built by placing similar words near one another and unrelated words farther apart, where similarity is measured via attention vectors [122]. The mapping correspondence between words in the two languages is found by matching their structural patterns [123].
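As an illustration of this structural-pattern matching, the following Python sketch (toy data: the word lists and embedding vectors are invented, not taken from [122],[123]) matches three English words to their Chinese counterparts purely by comparing within-language similarity matrices, without using any dictionary:

# Minimal illustrative sketch (toy data, invented vectors): matching words across two
# languages by comparing the structure of their within-language similarity patterns,
# independent of the word labels themselves.
from itertools import permutations
import numpy as np

def sim_matrix(vecs):
    """Pairwise cosine-similarity matrix of the word vectors (rows in given order)."""
    M = np.array(vecs, dtype=float)
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    return M @ M.T

# Hypothetical embeddings learned separately for each language (order: dog, cat, car)
en_words = ["dog", "cat", "car"]
en_vecs  = [[0.9, 0.1, 0.0], [0.8, 0.3, 0.1], [0.0, 0.2, 0.9]]
# Chinese (pinyin) counterparts in scrambled order: che=car, gou=dog, mao=cat
zh_words = ["che", "gou", "mao"]
zh_vecs  = [[0.05, 0.25, 0.85], [0.85, 0.15, 0.05], [0.75, 0.35, 0.1]]

S_en, S_zh = sim_matrix(en_vecs), sim_matrix(zh_vecs)

# Find the word-to-word mapping whose permuted similarity matrix best matches the other:
# this is the "structural network pattern" matching, done here by brute force.
best_perm = min(permutations(range(len(zh_words))),
                key=lambda p: np.sum((S_en - S_zh[np.ix_(p, p)]) ** 2))
for i, j in enumerate(best_perm):
    print(en_words[i], "->", zh_words[j])   # expected: dog->gou, cat->mao, car->che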
5 The dual brain
The Parkins-Adolphs-Kuo-Squire-Yordanova (PAKSY) model of procedural vs. declarative representation and processing in the brain, introduced in [65] P2-1.3, describes information processing and representation in the brain in terms of the duality of procedural-declarative or intuitive-analytic processing.
The distinction between procedural and declarative process is the same as the distinction between automatic
skills and controlled processing (Adolphs 2009 [66]), between implicit and explicit knowledge (Yordanova
2008 [69]), and between analytic reasoning (slow, controlled, and effortful) and intuition (fast, emotional,
effortless, and creative) (Kuo et al. 2009 [67]; Rhodes 2012 [114]).
Lateralization of functionality within the brain [65] P3-8
Information and processing associated with left cerebral hemisphere functionality:
The left cerebral hemisphere plays a major role in processing of speech, verbal information, language. It is
concerned with feature extraction, abstract concepts, and logical-analytical processing, it works sequentially.
The left hemisphere is involved in the process of active vigilant attention, and it controls focal attention.
It has been suggested that only the left hemisphere has the underlying brain organization that allows for high-
level consciousness.
From this one can conclude: the left cerebral hemisphere is responsible for the analytic/declarative (abstract)
information representation and processing, and its function can be identified with Damasio’s high-level
consciousness.
Information and processing associated with right cerebral hemisphere functionality:
One common type of information processed by the right cerebral hemisphere is spatial information; the other is the perception and production of emotional information.
One quality of the right cerebral hemisphere is holistic/global information processing. It processes information
quickly, responding more quickly than the left hemisphere.
Other features of the right cerebral hemisphere are parallel processing, analog perceptual representation, and
implicit information processing (i.e. through a relation network, as opposed to explicit through an expression or
formula).
It has been pointed out that the infant relies primarily on procedural memory systems during the first 2-3 years
of life (Schore 2000 [115]). Accordingly, we can assume that from the evolutionary point of view, the
procedural functionality is an earlier development stage than the declarative functionality.
From this one can conclude: the right cerebral hemisphere is responsible for the intuitive/procedural
(naturomorph) information representation and processing, and its function can be identified with Damasio’s
autobiographical self.
Functionality of the left and right cerebral hemisphere
Aspect | Left Brain Hemisphere | Right Brain Hemisphere
Functions | Speech and language, mathematical computations, rational reasoning, logical analysis | Intuition, spatial awareness, music, creativity, facial recognition, art, rhythm
Personality | Logical, attention to details, analytical | Artistic, creative, open-minded
Traits | Rational decision-making, linear thinking, reality-oriented | Random thoughts, non-verbal processing, holistic thinking, fantasy-oriented
Thought process | Verbal and sequential | Non-verbal random thoughts
Problem-solving ability | Solves problems in the most logical way | Solves problems in the most intuitive way
Overall thinking | Detail-oriented | Holistic approach
Strengths | Language (verbal and written), mathematics and analytics, sequencing, reading, writing, spelling | Arts, music, coordination, multi-dimensional thinking, remembering places, faces, or events
Difficulties | Visualization, abstract thinking | Organizing a huge body of information, difficulty in following a sequence, in remembering names
Parts of the body being controlled | Controls the right side of the body | Controls the left side of the body
Effects on the body when damaged | Not able to understand spoken and written words, can’t see or perceive things on the right side of the body, slow movements | Visual perception is impaired, can’t see or perceive things on the left side of the body, short attention span, poor decision-making, slow learning process, impulsiveness
Communication between the abstract and naturomorphic learning [65] P2-2.4
Basic features of abstract functionality
Basic abstract logical functionality: simple arithmetic, simple logical conclusion, simple grammatical schemes
(like subject-predicate (SP), subject-verb-object (SVO))
Basic abstract semantic functionality: using/learning concepts, using/learning metaphors
Concept: is a set of related objects, sensorial images, motoric actions, emotions, events
Metaphor: is a description of a concept in terms of another concept, which is better understood, and taken from
physical, social, or psychic experience [74]. E.g. metaphor “TIME IS MONEY” has the realization “You’re
wasting my time”.
Basic features of naturomorph functionality
Basic naturomorph functionality: pattern recognition (e.g. face recognition), automatic structure finding, event prediction from time series of events, evaluation of similarity, analogy reasoning, pattern reconstruction (filling-in of missing features).
All these functionalities are characteristic for different types of neural networks [73].
Brain evolution [65]P3-10
According to MacLean, there are three evolutionary layers to the mammalian brain, which are described as the reptilian brain, the paleo-mammalian brain, and the neo-mammalian brain.
reptilian brain: brainstem and the basal ganglia
paleo-mammalian brain: limbic system
neo-mammalian brain: neo-cortex
As the limbic system is related to Damasio’s core self, and the brainstem is related to Damasio’s protoself, we can conclude that, from the evolutionary point of view, development goes from the protoself to the core self, and then to the cortex-based autobiographical self and high-level consciousness.
6 Consciousness
Wakefulness and consciousness
Wakefulness is a state in which sensory cognition and muscular motor function (including purposeful movement) are working, as opposed to sleep [4] ch7.
During REM sleep, sensory cognition and muscular motor function are very limited, although the brain is partly active producing images (dreaming in REM sleep), with low PMC activity.
During non-conscious wakefulness, sensory cognition and automatic movement are possible (during epileptic absences automatic movement is possible, e.g. fetching an empty cup and trying to drink), but there are no visible emotions, no (verbal or non-verbal) communication, no planning, and no sense of personal identity.
Apparently, during non-conscious wakefulness automatic behavior is controlled by periaqueductal gray (PAG)
in tegmentum (protoself) and subconscious emotions are controlled by deep superior colliculus, with complete
shutdown of the cortex. As a consequence: conscious feeling requires a functional cortex (insular cortex,
somatosensory cortex).
During somnambulant episodes the brain is in slow-wave sleep (non-REM), the PMC is not active, but there is activity in motor, sensori-motor and cingulate cortex [46]. Furthermore, motor and sensory activity is much more advanced than in PAG-controlled non-conscious wakefulness, and (unconscious) speech occurs. That means this happens under non-conscious core-self control.
Consciousness can be rated by its scope:
-minimal: drinking a cup of coffee, thinking of nothing
-medium: daydreaming ( watching an internal flow of images), or recalling personal episodes and impressions
of persons from the past
-high: in a dialogue with a friend or relative
This differentiation can be used to locate the different levels of consciousness, specified above:
-subconscious: protoself
-momentary-conscious: core-self
-medium-conscious: autobiographical self
-high-conscious: high-level consciousness
The function of the “consciousness manager” (which will be passed between main cortex regions) is basically:
-select “valuable” images out of the huge flow by evoking varying degrees of emotion as marker (“gut feeling”)
-organize them into a coherent narrative and bring this into focus in the scarce “focus display space”
-use evaluation criteria: anticipation of situations, preview of possible outcomes, navigation of possible future,
invention of management solutions
Ingredients of consciousness in neural-network model of a conscious machine [37]
1 attention
2 attention schema
3 library of patterns
4 talking search engine
Sensorial-attentional and cognitive focus workspace
Metzinger [75] distinguishes two most important types of mental action:
Attentional agency (AA), the ability to control one’s focus of sensorial attention.
Cognitive agency (CA), the ability to control goal/task-related, deliberate thought.
AA and CA are functional properties that are acquired in childhood, and can be lost due to brain lesions or
degradation. Their incidence and variance can be scientifically investigated and measured e.g. by brain
imaging.
In the human brain, we observe two main networks, which can be considered as two ‘focus competition
workspaces’ according to the Global-Workspace-Theory (GWT) by Baars [7]: the switched Triple-Network
(TN= DMN-VAS-CEN) and the DAS network, with distinct functionality and location (see below). Following
Metzinger, we can identify AA with the control instance of the DAS network, and identify CA with the
control instance of the TN-Network.
Global networks in primate brain
Global brain networks and their functions [39]
Central Executive Network (CEN)
The central executive network (CEN) (alias CCN=cognitive control network, FPN=frontoparietal network) is
a global brain network [78].
The CEN is primarily composed of the rostral lateral cortex , middle frontal gyrus and the anterior inferior
parietal lobule. Secondary regions include the middle cingulate gyrus and the dorsal precuneus, posterior
inferior temporal cortex, dorsomedial thalamus and the caudate nucleus.
The CEN is involved in executive function and goal-oriented tasks.
CEN network [78]
The CEN network has 6 CDR’s (associative areas) [78].
The CEN network is one of three networks in the so-called triple-network (TN), along with the VAS
(salience) network and the default mode network (DMN), where VAS switches focus between DMN and CEN.
Default mode network DMN
In neuroscience, the default mode network (DMN), is a large-scale brain network primarily composed of the
orbital frontal cortex , the lateral temporal cortex (LTC), the medial prefrontal cortex (mPFC), posteromedial
cortex (PMC) and angular gyrus (AG) [38], [39].
It is active when a person is not focused on the outside world and the brain is busy in self-reference:
autobiographical memory, thinking about others, thinking about oneself, remembering the past, anticipation of
future, daydreaming, moral judgment.
DMN is also highly active in meditation [41] and when enjoying art [40].
DMN in humans shows low activity in infants and evolves to full functionality in adults [38].
DMN has been shown to be negatively correlated with attention networks in the brain.
In monkeys there is a similar network of regions to human DMN, PMC is also a key hub in monkeys, but the
mPFC is smaller and less well connected [38].
The four main DMN areas in the human brain (mPFC, PMC, 2 AG) [38]
Human DMN (left) [38]: significant clusters include 1 orbital frontal cortex; 2/3 medial prefrontal cortex/anterior gyrus; 4 lateral temporal cortex; 5 inferior parietal lobe; 6 posterior medial (PMC)/retrosplenial cortex; 7 hippocampus/para-hippocampal cortex.
Monkey DMN (right) [38]: 2/3 dorsal medial prefrontal cortex; 4/5 lateral temporoparietal cortex (including area 7a and superior temporal gyrus); 6 posterior medial (PMC)/precuneus cortex; and 7 posterior parahippocampal cortex.
The DMN network has 4 CDR’s (associative areas) [38].
Attention networks
Attention networks are sensor-oriented networks (visual, auditory, sensorimotor), executive control network,
salience network (jumping attention), and the attention-controlling dorsal and ventral networks as supramodal
attention systems [32].
The dorsal attention system (DAS) controls the top-down biases of sensory areas [42].
The salience or ventral attention system (VAS) is typically recruited by infrequent or unexpected events that
are behaviorally relevant, and for sensory filtering [42].
The VAS network facilitates switching between the CEN and DMN [77].
Functional connectivity maps for dorsal seed regions (IPS/FEF, blue) and ventral seed regions (TPJ/VFC, red)
during fMRI resting state [42]
CDR’s of DAS and VAS-network [76]
According to Man [76], we can distinguish in the primate brain 4 CDR’s (secondary association areas) in the
DAS-network and 3 CDR’s in the VAS-network.
Interaction between the VAS (salience) and DAS networks enables dynamic control of attention in relation to
top-down goals and bottom-up sensory stimulation [42].
The VAS network mediates switching between the DMN and CEN [77]
Language network
According to recent research [79], language comprehension (phonetic and textual) and language production are controlled by two dorsal and two ventral pathways. It seems that this language network is an additional, eighth global network competing for focus.
The dorsal pathway between the temporal cortex (pMTG/STG) and the premotor cortex (dPMC) supports speech repetition.
The dorsal pathway between the temporal cortex (pSTG) and Broca's area (BA44) supports complex syntax.
The ventral fiber tract UF, connecting the inferior FC with the anterior TC, supports basic syntactic processes.
The ventral fiber tract IFOF, connecting BA45 in the iFG and BA37 in the MTG, supports semantic processes.
Concise characteristics of consciousness
Taking all this into account:
According to Metzinger and GWT, consciousness has two modes:
cognitive-focused mode in the triple network (TN), consisting of DMN-VAS-CEN
sensorial-attentional-focused mode in the dorsal attention networks (DAS)
Consciousness functions as two global workspaces with CDR’s as agents competing for sensorial-attentional
and cognitive focus, under the coordination of PMC (TN cognitive-focused mode) and DAS attentional agent
(sensorial-attentional focused mode).
We can plausibly describe consciousness and wakefulness:
consciousness is a state of high-level awareness of self and surroundings, including previewing and planning
of future
wakefulness is a state of low-level awareness of surroundings and automatic movement control.
consciousness has two modes: cognitive-focused (TN= DMN-VAS-CEN) and sensorial-attentional-focused
mode (DAS)
consciousness functions as two global workspaces with CDR’s as agents competing for focus, under the
coordination of PMC (TN cognitive agent) and DAS sensorial-attentional agent.
Three genetic–environmental networks for human personality
I.Zvir et al. [117] have analyzed complex human behavior statistically and found three major systems of
learning and memory. They characterized these as:
(1) unregulated temperament profiles (i.e., associatively conditioned habits and emotional reactivity),
(2) organized character profiles (i.e., intentional self-control of emotional conflicts and goals),
(3) creative character profiles (i.e., self-aware appraisal of values and theories).
In a subsequent paper [118] they uncovered a genetic basis for these three temperament profiles.
There are largely disjoint sets of genes regulating the three distinct learning processes in different clusters of people: unregulated temperament profiles (network 1, with associative conditioning), organized character profiles (network 2, with intentionality), and creative character profiles (network 3, with self-awareness).
Furthermore, they carried out a genetic comparative analysis of these genes from modern humans with
Neanderthals and chimpanzees, and found that only the first network was common to all three primate species,
but the two latter were found only in modern human individuals.
Evolutionary stages of consciousness
Consciousness is an evolving biological phenomenon, and as such it can be traced back in biological evolution.
The following table presents an evolutionary picture of Damasio's concept of multi-layered consciousness [31], combined with Zvir’s genetic–environmental networks.
Name of layer | Chief characteristics of layer | Typical life form
Top-level consciousness | possession of complex language skills, symbolic thinking, strong sense of both past and future, strong sense of autobiographical self and memory, high-level consciousness, conscience, substantial artistic and technical creativity (Zvir profiles 2 & 3) | modern humans
Higher extended consciousness | possession of basic language skills, strong sense of both past and future, strong sense of autobiographical self and memory, advanced-level consciousness (Zvir profile 1), conscience | Neanderthals
Extended consciousness | possession of rudimentary language skills, limited sense of autobiographical self and memory (passes mirror test), limited sense of both past and future | chimpanzees, dolphins
Core consciousness | sense of core self, conventional long-term and short-term memory, strong sense of being in the present, no language ability | higher mammals
Consciousness of self and external object relationships | detects changes in self and images of external objects, rudimentary memory | fish, reptiles, primitive mammals
Consciousness of proto-self | wakefulness, image-making ability, minimal attention, detection of object significance | simple animals
Simulation model of mind behavior [35]
The above consciousness model can be formulated with the brain as a state-machine, with constant transition
time and fixed transition rules.
Three important concepts in this theory are 'emotion', 'feeling' and 'feeling a feeling' (in core consciousness).
The two mechanisms by which a feeling can be achieved as distinguished by Damasio have been incorporated
in the model:
(1) via the body loop, the internal emotional state leads to a changed state of the body, which is then
represented in sensory body maps in protoself.
(2) via the as if body loop, the state of the body is not changed. Instead, a changed representation of the body is
created directly in sensory body maps and produces the same feeling as with a genuine sensory stimulus.
“Feeling a feeling” is a second-order representation of the mind state S; here S = feeling the music, and sr(music) is the sensory map in the protoself. “Feeling a feeling” of music means sensing S as sensory input (see diagram below).
The transition rules for the above example are shown in the diagram below.
The simulation is run with special software [36], which generates simulation traces like the one shown below for the as-if body loop.
The traces are a verification tool, with which one can check whether the model based on the above transition rules yields the behavior expected in real mind functionality. This was indeed the case for the simulated consciousness model.
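The following Python sketch (not the simulation software of [35],[36]; the state names and rules are simplified assumptions) illustrates the state-machine formulation: propositions become active one transition step after their preconditions, and switching USE_BODY_LOOP selects between the body loop and the as-if body loop.

# Minimal sketch of the mind-as-state-machine idea (illustrative only; not the simulation
# software of [35],[36]). States are sets of active propositions; transition rules fire
# with a fixed delay of one time step, distinguishing the body loop from the as-if body loop.

USE_BODY_LOOP = False   # False = "as if" body loop: the body state is not actually changed

# transition rules: if all preconditions are active at time t, the conclusion is active at t+1
RULES = [
    ({"stimulus(music)"},         "emotion(music)"),
    ({"emotion(music)"},          "body_change(music)" if USE_BODY_LOOP
                                  else "sensory_body_map(music)"),   # as-if loop skips the body
    ({"body_change(music)"},      "sensory_body_map(music)"),
    ({"sensory_body_map(music)"}, "feeling(music)"),
    ({"feeling(music)"},          "feeling_a_feeling(music)"),       # second-order representation
]

state = {"stimulus(music)"}
for t in range(6):
    print(f"t={t}: {sorted(state)}")
    state = state | {concl for pre, concl in RULES if pre <= state}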
Brain functionality of consciousness [102]
[102] Effective connectivity changes with anaesthetic-induced unconsciousness in the human
lateral cerebello-thalamocortical network (White and Alkire 2003 [116]). Part (a) of the figure shows
the network nodes, with their Talariach coordinates, and their modelled interactions. Structural
equation modelling of this limited corticothalamic network (b) reveals that effective connectivity
dramatically changes within this network, especially involving the thalamocortical and corticocortical
interactions, depending on the presence or absence of consciousness.
Central consciousness: (4) Thalamus (Nucleus ventralis anterior (VA), Nucleus ventralis lateralis (VL)), (5) Cortex supplementary motor area SMA, (6) Cortex primary motor area M1
7 Language [JH]
Language evolved in humans as a powerful method of communication to promote cooperation and passing of
information, in parallel and in mutual influence with the evolution of brain functionality [55].
Based on DNA-genetic (FOXP2 gene), paleontological, and primate-learning research we can assume that Homo erectus probably possessed limited vocal and mental language capability at the level of stage2 (modern human child aged 1.5 years). Furthermore, Sapiens and Neanderthals have basically the same FOXP2 gene, apart from an intron insertion, which happened very early in Sapiens [59]. From this and from FOXP2 mutation studies in modern humans [59] it follows that Neanderthals probably had the language capability of stage3 (modern human child aged 3 years), whereas Sapiens probably had from the beginning (~300ky ago) the language capability of stage4 (modern human child aged 5 years), which gave them an evolutionary advantage over Neanderthals.
Stage5 (symbolic language and thinking) emerged probably around 50-70ky ago [2] [55], and caused a leap in
cultural evolution and the second migration of Sapiens to Europe, South Asia and Australia.
Language evolved for double purpose: communicative (self-expression and appeal) and representational
(knowledge database) [93] . The communicative aspect exists already in the vocal and gestural communication
of apes [94], whereas the representational aspect exists only in human language, as shown below [93].
Languages obey laws of Darwinian evolution. Darwin treated languages like species, and indeed, languages
mutate with time-constant rates, they exchange words, they can merge, undergo selection and extinction.
There is an evolution tree for languages, like the biological evolution tree, and extinct parent languages can be
reconstructed, e.g. Proto-Indoeuropean (~7ky), Eurasiatic (~12ky). There are fundamental similarities among
all languages, so it is plausible that there was once a simple common original language, perhaps similar to the
oldest known language of Kho-San (~150ky).
Origin of language
In regard to the origin of language in hominins there are several hypotheses; the most plausible ones, probably all of them partly true, are [55]:
-Language began as imitations of natural sounds, which is supported by the fact that this is even today an
important source of new words. Furthermore, there exists “phonetic symbolism” in most languages, e.g. small,
sharp, high things tend to have words with high front vowels in many languages (e.g., /i/in “little”), while big,
round, low things tend to include back vowels (e.g., /a/in “large”).
-Gestures are at the origin of language and body movement preceded language; indeed, gestures continue to play a significant role in contemporary human communication under particular conditions.
- Language arose from rhythmic chants and vocalisms uttered by people engaged in communal labor, hunting,
or dancing.
A recent approach put forward by G. Forrester is the best supported by evidence: vocal language evolved in early hominins from tool-making, in parallel with gesture language.
Forrester et al. [92] [96] found a fundamental similarity in structure and functionality between language and
motoric puzzle solving in apes (gorillas and chimpanzees) and in children aged 2-5.
There is a clear correspondence between elements of language and elements of motoric puzzles or motoric
sequences in tool-making [96]. Real-time PET analysis of the brain shows that the action patterns during
processing of motoric puzzles and of vocal language are very similar [96] in human and in apes.
A conclusion is that tool-making (i.e. solving problems with our hands) and vocal language share very similar brain processing, both in humans and in apes.
This strongly suggests, that language evolved in early hominins from motoric action sequences during tool-
making in parallel with gesture language and vocalization.
Fundamental structure
Language is processed by two independent brain processes: lexical/semantic, located in the Wernicke area, closely connected with the pre-motor cortex; and grammatical/audio-motoric, located in the Broca area, coupled to the auditory cortex.
There are two major areas involved in language: frontal Broca area (BA44, BA45) and temporal Wernicke area
(BA22, BA21, BA37, BA39 ) [55].
There are several models for language perception, language generation and combined perception-generation.
Processing of language perception in 3 steps [55]: phoneme recognition, verbal-acoustic recognition,
semantic recognition.
Processing of language generation in 5 steps [JH]:
-inserting semantic symbols into slots for subject-verb-object (SVO),
-replacing SVO parts by nested sentences or expressions,
-adding connection words (cw) to express relations,
-transforming words by inflection (f) according to grammar,
-sequencing words into phonemes s1,...,sn, v1,...,vn, o1,...,on for vocal output.
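A toy Python sketch of these five steps (the vocabulary, determiners and inflection rule are invented for illustration) could look like this:

# Toy sketch of the five generation steps above (vocabulary and rules are invented).

svo = {"subject": "dog", "verb": "chase", "object": "hare"}        # 1) fill SVO slots
svo["object"] = "hare that runs"                                    # 2) replace a slot by a nested expression
determiners = {"subject": "the", "object": "the"}                   # 3) connection words expressing relations

def inflect(verb, tense="present", person=3):                       # 4) inflection according to grammar
    return verb + "s" if tense == "present" and person == 3 else verb

words = [determiners["subject"], svo["subject"],
         inflect(svo["verb"]), determiners["object"], svo["object"]]
phonemes = [ch for w in words for ch in w if ch != " "]             # 5) sequencing into phonemes (here: letters)
print(" ".join(words))        # the dog chases the hare that runs
print("-".join(phonemes))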
Combined language perception and generation based on fMRI studies [80]: top row: acoustic processing of heard words and visual processing of written words; second row left: phonological processing of speech sounds relative to environmental sounds; second row middle: semantic decisions relative to phonological decisions on the same words; second row right: retrieving the name; third row: articulatory planning in anterior insula and frontal operculum; fourth row: speaking and hearing the answer in sensorimotor cortex (SMC) and superior temporal gyri (STG).
Processing of linguistic information in the brain uses two pathways with different functionality: the dorsal
pathway and the ventral pathway connecting the two language areas [96].
Linguistic information processing in the human brain (right) runs along two main pathways [96]: ventral
pathway (green arrows) is responsible for meaning recognition, dorsal pathway (red arrows) is responsible for
sequence and dependence recognition and anticipation.
Both pathways exist in the brain of rhesus monkeys (left) and function similarly [96]: the ventral pathway is
responsible for meaning evaluation of auditory and vocal input, the dorsal pathway is responsible for spatial
location and temporal sequence.
As was shown by research in neurology of lesions and genetic defects in humans, the Wernicke area is
responsible for lexical analysis and semantics (meaning) of words, while the Broca area is responsible for the
grammar, syntax, relations between words, and for the vocal combination of phonemes into words during
speech generation.
In the right cerebral hemisphere (RH), there are two analogous language-related areas in mirror-symmetric
locations, which process prosody (intonation and accentuation), comprehension-related Wernicke-RH , and
articulation-related Broca-RH [71] ch8.1.
The three main cognitive aspects of human language are [87]:
-syntax (grammar) is the set of rules to combine words in a sentence and expressing their relations (e.g. tenses
for temporal relations, cases and particles for locative relations)
-semantics is the (sensorial, symbolic-literate, and figurative) meaning of words and their combinations (e.g.
the biological, social, and personal meaning of the word ‘mother’)
-pragmatics is the set of rules for use of language in conversation and social situations (interpersonal
negotiations, slang expressions, colloquial language)
The basic structures of human language are
-the subject-predicate (SP) sentence, e.g. the man (subject) is big (predicate)
-the subject-verb-object (SVO) sentence, e.g. the dog (subject) chases (verb) the hare (object)
where all parts can have local predicates (adjective for noun, adverb for verb) and form a compound (elementary part + predicate), and are connected by connecting words, which express their relations. Apart from this, language of stage5 (symbolic) is recursive, i.e. every elementary part, predicate and compound can be replaced by a sentence or by an expression (connected by ‘and’, ‘or’, etc.).
There are main word classes: noun, verb, adjective, adverb, and there are connecting word classes: pronoun
(she), preposition (after), conjunction (and), determiner (those), exclamation (oh) , particles (look up).
There are grammatical categories: tense (past, present, future), number (singular, dual, plural), gender
(masculine, feminine, neuter), noun classes (animated, humane,...), locative relations (cases, particles), persons
(I, you), aspects (progressive, non-progressive), modalities (active, passive).
Stage1: motoric action sequences , gesture language with vocalization
Without question, initial human language was similar to the communication systems observed in other
primates, such as chimpanzees, orangutans, gorillas, and gibbons.
Chimpanzees use a diversity of gestures (including facial expressions) to communicate, in addition, they have a
limited repertoire of vocalizations (they produce about 12 different vocalizations) that can be used for
communication purposes with other chimps. Also, chimpanzees can learn some artificial languages (such as
using tokens or gestures) and close to about 200 words. Kanzi, the human-raised bonobo, has an active token vocabulary of 200 words [33]. Still, non-human primates are unable to learn grammar: they can form sentences of only 2 words with distinct meaning, and cannot form nested sentences or expressions.
Vocalization is very much present in modern language: people in everyday life frequently use a diversity of
noises (vocalizations) to say “yes,” “no,” to express different emotions, to make emphasis, and so forth.
Gestures are equally important in modern communication, so we can assume that the basic communication for
early hominins (Australopithecus) took place in the same way as it does for chimps: by gestures and
vocalization.
Forrester et al. [92] [96] found a fundamental similarity in structure and functionality between language and
motoric puzzle solving in apes (gorillas and chimpanzees) and in children aged 2-5.
This strongly suggests, that language evolved in early hominins from motoric action sequences during tool-
making in parallel with gesture language and vocalization.
Stage2: initial language 2-word sentences
It was proposed by Bickerton [55], that a protolanguage developed from the original chimp-like gesture-
vocalization language, and was used already by Homo habilis (~2.4My ago), and later in a refined form
(two-word sentences) by Homo erectus (~1.8My ago).
This protolanguage corresponded to a certain definite development stage in children aged about 18 months [57],[56].
At around 18 months, children refer to themselves by name.
Those children understand:
familiar phrases like ‘Give me the ball’
simple instructions like ‘Stop that’
very simple explanations like ‘The sun is out, so we need our hats’.
The children know and use 20-100 meaningful words.
At this stage there is a basic set of phonemes [55] : a o i e g/k n m p/b .
Children use one- or two-word sentences.
Stage3: simple language with SVO-structure, 4-word sentences
At the age of 3 years, another definite stage in the child language is reached [55] [58]. This was probably
approximately the language level of Neanderthals and Denisovans.
At this age, children speak in sentences of 3-4 words with SVO or SP structure and are getting better in
pronunciation (extended set of phonemes + b l ).
In their language there is a basic vocabulary [55] :
pronouns (I), quantities (more), adjectives (big), human distinctions (father), animals (fish), natural phenomena
(sun), colors (red).
Children start using words like ‘more’ and ‘most’, as well as words that make questions, like ‘who’, ‘what’ and
‘where’.
Children start to say ‘me’, ‘mine’ and ‘you’, and understand the difference between ‘mine’ and ‘yours’.
They start to use grammar and more structured sentences. For example, instead of ‘I go’, the child might say ‘I’m going’. The child uses the past tense – for example, ‘walked’, ‘jumped’ – and starts using plurals like ‘cats’ or ‘horses’.
Irregular forms are still missing. For example, the child says ‘foots’ for ‘feet’, or ‘goed’ instead of ‘went’.
A child can participate in a simple conversation like:
the child says ‘I go shop’, the adult responds ‘And what did you do at the shop?’, and the child replies ‘Buy bread’.
Children start playing with language through rhyming, singing and listening to stories.
Stage 4: relational language, 9-word sentences
The next definite stage in the child language is reached at the age of 5 years: they use relational language, i.e.
the full set of connecting words. This was probably approximately the language level of early Sapiens.
At this age [58], children begin to use:
connecting words, like ‘when’ and ‘but’
words that explain complicated emotions, like ‘confused’, ‘upset’ and ‘delighted’
words that explain things going on in their brains, like ‘don’t know’ and ‘remember’
words that explain where things are, like ‘between’, ‘above’, ‘below’ and ‘top’.
more adjectives e.g. ‘empty’ and ‘funny’.
They use long sentences of up to nine words.
They use increasingly complex sentences by joining small sentences together using words like ‘and’ or
‘because’.
Children begin to use many different sentence types.
For example, they say both ‘The dog was chasing the cat’ and ‘The cat was chased by the dog’ to mean the
same thing.
They use different word endings. For example, the child can add ‘er’ to the ends of words, so that words like
‘big’ turn into ‘bigger’.
But they still make some grammatical mistakes – for example, ‘They wants to go’ instead of ‘They want to go’.
They start using past and future tense, and they get better at using the past tense, as well as irregular plurals like
‘mice’ and pronouns like ‘them’, ‘his’ and ‘her’.
By this age, children understand and use words that explain when things happen, like ‘before’, ‘after’ and ‘next
week’, they start to understand figures of speech like ‘You’re pulling my leg’ and ‘They’re a couch potato’.
A child will follow directions with more than two steps. For example, ‘Give your ticket to the man over there,
and he’ll tear it. Then we can go to the movie’.
There are still mistakes in pronunciation, like for example, saying ‘fing’ for ‘thing,’ or ‘den’ for ‘then’.
Children engage in more sophisticated conversation:
‘I went to Max’s and we had cake and Max is from my preschool’.
Children begin to use language to tease and tell jokes.
Stage 5: symbolic word representation
Symbolic language and thinking was probably the reason for a leap in cultural evolution and the second
migration of Sapiens around 50-70ky ago [2] [55] . At this time, the first objects of art and first symbols (e.g.
geometric figures) appear.
This stage corresponds roughly to the literalization stage in modern children at the age of 6-8 years.
At this stage, children learn to map phonemes into letters, and words into written words (which symbolize the
vocal word).
In languages with pictorial characters (like Mandarin), children learn to represent words by their corresponding characters (again, the pictorial character symbolizes the vocal word). In particular, in Mandarin words are syllables, which carry meaning and are represented by pictorial characters (hanzi); the vocal value consists of one vowel with one or two consonants and a tone (one of 5): the word kǒu (mouth) is pronounced in the falling-rising tone as k+o+w and corresponds to the character 口.
At 5-6 years, children know some or all of the sounds that go with the different letters of the alphabet [58]. At
this age, children also learn that single sounds combine together into words. For example, when you put the ‘t’,
‘o’ and ‘p’ sounds together, they make the word ‘top’.
By six years, children start to read simple stories with easy words that sound the way they’re spelled, like ‘pig’,
‘door’ or ‘ball’. They’re also starting to write or copy letters of the alphabet, especially the letters for the sounds
and words they’re learning.
By eight years, children understand what they’re reading. By this age children can also write a simple story.
8 Planning [JH]
The planning process in primates is carried out by association processing.
Higher-order integrative cortical areas, called association areas, connect the sensory inputs and motor outputs
and represent the highest level of the brain functionality [71] ch4.9.
The association areas can be identified with Damasio’s CDR’s (convergence-divergence regions) as high-level
nodes of neural networks, described in chapter 4.
Locations of association areas [71] ch4.9
Three multimodal association areas are especially important:
Component | Location | Function
Limbic association area | parahippocampal gyrus, anterior-ventral temporal lobe | Links emotion with many sensory inputs, is important in learning and memory
Posterior association area | junction of occipital, temporal and parietal lobes | Links information from primary and unimodal sensory areas, is important in perception and language
Anterior association area | prefrontal cortex | Links information from other association areas, is important in memory, planning, and higher-order concept formation
Limbic Association Area
The limbic association area receives information from virtually every other association area and relates all the
stimuli of an event, including its emotional context. The emotion associated with an event determines its
importance and whether or how long it is remembered.
Posterior association area
Lesions of the visual posterior association area can result in the inability to recognize familiar faces or learn
new faces while at the same time leave other aspects of visual recognition intact—a deficit called
prosopagnosia. This association area is crucial for face recognition, naming of objects, and association with
other objects and symbols.
Anterior association area
Lesions to this area result in diminished attention span, reduced ability to concentrate, and impaired abstract reasoning. This association area is central for planning and higher-order concept formation.
9 Understanding other minds: Theory of Mind
Theory of Mind (ToM) in psychology means understanding the (verbal-conscious) beliefs, opinions and
knowledge of others, as opposed to empathy, which means understanding the (intuitive-subconscious) emotions
and feelings of others [85].
Correspondingly, ToM is processed in the brain on the verbal-conscious level, whereas empathy is processed
on the non-verbal-intuitive level (and is a precursor mechanism of ToM, e.g. in preverbal infants).
Precursor of ToM
Infants aged 13-15 months are able to understand (expressed by attention, imitation, pointing) emotions, feeling
and attention focus of adults [84] [88].
Non- verbal infants react with increased attention and processing to violation-of-expectation events, which is
regarded as a precursor ability for ToM.
False-belief-tests
False-belief tests (fbt), like the Sally-Anne test, assess the ability to evaluate the knowledge contents of others and to qualify them as correct or wrong. Children normally pass the fbt at age four (Zaitchik 1990 [85]), whereas 80% of autistic children fail.
Language and ToM [84]
Brain areas responsible for language processing and those responsible for ToM are closely connected, the most
prominent is the temporo-parietal junction (TPJ).
The TPJ is involved in language processing (new vocabulary, comprehension and reproduction of words); in recognition processing it is involved in face, voice, and motion recognition.
The TPJ is active in ToM processing during reading or observing images related to others’ beliefs, and is not active when observing reactions to physical stimuli (which is not part of ToM).
Passing the fbt is correlated with understanding of mental-state words like ‘believe’. There is no influence of the syntax and semantics of the used language on fbt success; only the pragmatic aspect of the used language has an influence.
The five key aspects of ToM acquired at age 3-5 are: diverse desires, diverse beliefs, access to knowledge, false beliefs, and hidden emotions.
Brain mechanisms
The brain areas active in fbt processing are [86]: medial prefrontal cortex (mPFC), precuneus (upper portion of the medial parietal lobe), and right temporo-parietal junction (rTPJ).
The brain areas active during Heider-Simmel animations (HSA) are [88]: mPFC, posterior superior temporal sulcus (pSTS), and fusiform face area (FFA).
HSA depict social interaction by moving geometric shapes.
Within the ToM frame, mirror neurons representing individual beliefs in others were identified in dorsomedial
prefrontal cortex (dmPFC) by researchers from Massachusetts General Hospital [88].
As a part of ToM, perception of intentionality in human actions is processed by pSTS [89].
From the study of brain lesions, it is known that patients with lesions in frontal and temporo-parietal lobes have
difficulties with ToM tasks, which supports the results in the fbt and Heider-Simmels processing mechanism
[88].
As a summary, the main ToM processing areas are depicted below [84]: TPJ, precuneus, dmPFC, mPFC, pSTS.
Non-human ToM
Povinelli et al. showed in experiments with chimpanzees that they fail fbt in most cases [88].
Haroush & Williams [88] showed that the anterior cingulate cortex (aCC) in rhesus monkeys contains mirror
neurons for social interactions. Based on this result, they proposed that aCC is the location of precursor-ToM
abilities in primates.
Gallese & Fadiga [85] found mirror neurons for motoric action in others in the premotoric cortex of rhesus
monkeys.
10 Models of consciousness
10.1 Inage & Hebishima model [97] [99]
Consciousness models and Inage & Hebishima model
There are three main ways of thinking about mind-body problems, which are compared below with the proposed model:
1) Interactionism
The brain has physical states, which interact physically with the senses, and it has mental states, which interact non-physically among themselves and physically with the physical states. The mental part is dominant: it produces the new physical state and the action based on cognition.
2) Epiphenomenalism
The physical part of the brain is dominant: it produces the new physical state and the action based on cognition; the mental part is only the memory, where the physical states (episodes) and the outer-world representation (meaning) are stored.
3) Parallelism
There is a correspondence between physical and mental states: an initial physical state produces the new physical state based on cognition, and the corresponding initial mental state produces non-physically the new mental state, which corresponds to the new physical state.
4) Proposed new concept
The proposed concept is a modified combination of 1 and 2: the initial physical state is compared with similar
episodes in memory (mental states), the selected episode determines the action and the new physical state, then
both are stored as a new episode (new mental state).
“Free will” here means that the selection is probabilistic and motivation-oriented.
In this concept, consciousness evolved from simple reflexes by inclusion of control by learned episodic
memory with emotional weighting.
Schematic description of the four models
Pi physical states initially and after cognition, Mi mental states initially and after cognition
This means concretely that physical states are characterized by measurable parameters α1, α2, ..., αn, and they evolve from each other by some physical mechanism, e.g. by Na-Ca ion flow through membranes, as in neurons.
Mental states are informal, i.e. they carry meaning encoded by some coding procedure.
Mental states evolve from each other by some information-processing algorithm, e.g. by taking the binary complement of a number.
Example: 3 electronic flip-flops are a physical system characterized by 3 output voltages U1, U2, U3, where Ui = 5 V = on(1) or Ui = 0.3 V = off(0).
The same 3 flip-flops are an informal system, which encodes a number zi = 0, 1, ..., 7, where z0 = (0,0,0) = 0, z1 = (0,0,1) = 1, ..., z7 = (1,1,1) = 7.
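A small Python sketch of this flip-flop example (the threshold and bit convention are assumed as in the text) makes the distinction concrete: the physical state is the list of voltages, the informal state is the number they encode, and an information-processing step acts only on the latter.

# Sketch of the flip-flop example: the same system described physically (voltages) and
# informally (the number it encodes). Threshold values follow the example in the text.

def encode(voltages):
    """Informal state: interpret each voltage as a bit (5 V -> 1, 0.3 V -> 0) and form a number."""
    bits = [1 if u > 2.5 else 0 for u in voltages]
    return sum(b << i for i, b in enumerate(reversed(bits)))

def complement(z, n_bits=3):
    """An information-processing step on the informal state: binary complement of the number."""
    return (~z) & ((1 << n_bits) - 1)

U = [5.0, 0.3, 5.0]          # physical state (U1, U2, U3)
z = encode(U)                # informal state: 0b101 = 5
print(z, complement(z))      # 5 2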
1 Interactionism
initial physical state P1 produces with cognition physically the informal state M1, M1 produces physically
state P2.
2 Epiphenomenalism
The initial physical state P1 produces with cognition physically the informal state M1 and produces physically P2, and P2 produces physically the informal state M2; there is no interaction between M1 and M2.
3 Parallelism
The initial physical state P1 produces physically P2; P1 corresponds (without interaction) to the informal state M1; M1 produces informally M2, which corresponds to the new physical state P2.
4 Proposed model (modified combination of 1 and 2) = Inage & Hebishima model
P1 is compared with corresponding episodes M1; an action Me is selected based on P1 and M1; Me produces physically P2, which creates the episode M2.
Example: neural network color cognition. Tab 2: learning with an identical RGB goal; Tab 4: learning with a complementary RGB goal.
Mathematical formulation
Given a probability space (Ω, F, P) and the sensory (image) variable X, consider the probability space (ΩA, FA, PA) and the sensory variable X for observer A, and the same variable X with the probability space (ΩB, FB, PB) for observer B.
FA, FB: individual sensory spaces for A and B, i.e. sets of colors, e.g. Tab 2 and Tab 4.
ΩA, ΩB: individual event cognition spaces (qualia), e.g. the event “black dog” (Ω(color) × Ω(animal)).
PA, PB: P(a1, a2, ..., an) = Πi P(ai) is the probability of the event (a1, a2, ..., an), where 1 means present and 0 means absent, and where the ai are elements of the basic cognition spaces Ωi, with Ω = Ω1 × Ω2 × Ω3 × ... .
Example: “standing white dog on green” = (a1, a2, a3, a4), with a1 = standing (movement), a2 = white (color), a3 = dog (animal), a4 = on green (background).
The Kullback–Leibler divergence DKL is defined as
DKL(pA, pB) = ∫ pA(x) log( pA(x) / pB(x) ) dx
with the properties
DKL(pA, pB) ≥ 0,
DKL(pA, pB) = 0 is equivalent to pA = pB,
DKL(pA, pB) ≠ DKL(pB, pA) in general.
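For discrete distributions the integral reduces to a sum; a minimal Python sketch (with toy distributions) illustrating the properties above:

# Minimal sketch: Kullback-Leibler divergence for two discrete distributions,
# illustrating the properties listed above (non-negativity, asymmetry).
import numpy as np

def kl(p, q):
    """D_KL(p || q) for discrete distributions p, q given as arrays summing to 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                      # terms with p(x)=0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p_A = [0.7, 0.2, 0.1]                 # e.g. observer A's color distribution
p_B = [0.5, 0.3, 0.2]                 # observer B's distribution over the same events
print(kl(p_A, p_B), kl(p_B, p_A))     # both >= 0, and generally not equal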
Flow of consciousness in example
Step1 Sensory recognition, episode creation “Standing white dog on green”
Step2 Comparison with episode memory and its emotional weighting
Step3 Finding the most similar episode and deciding based on its emotional weighting distribution;
here: the most probable episode found = “Biting white dog” with maximum-weight emotion = fear, corresponding action(fear) = escape, goal-situation(fear) = protection
Step4 Creating the episode “Escape from dog, for fear”, storing it in episode memory with the largest weight = fear
In Step4 the action becomes conscious; the time delay between Step3 (“action”) and Step4 (“action recognition”) is on average 0.3 s, according to B. Rivet.
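A toy Python sketch of this four-step flow (the episodes, attributes, weights and actions are invented for illustration, not taken from [97],[99]) could look like this:

# Toy sketch of the four steps above (episodes, attributes and weights are invented).

episode_memory = [
    {"attrs": {"dog", "white", "biting"},   "emotion": "fear",      "weight": 0.9, "action": "escape"},
    {"attrs": {"dog", "black", "sleeping"}, "emotion": "curiosity", "weight": 0.3, "action": "approach"},
]

def similarity(a, b):                       # Step 2/3: compare episodes by shared attributes
    return len(a & b) / len(a | b)

# Step 1: sensory recognition creates the current episode
current = {"dog", "white", "standing", "on green"}

# Step 3: the most similar stored episode decides the action via its emotional weighting
best = max(episode_memory, key=lambda e: similarity(current, e["attrs"]) * e["weight"])
action = best["action"]                     # here: "escape", driven by fear

# Step 4: the new episode (with the chosen action and dominant emotion) is stored; at this
# point the action becomes conscious in the model.
episode_memory.append({"attrs": current | {action}, "emotion": best["emotion"],
                       "weight": best["weight"], "action": action})
print(action, best["emotion"])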
Features of the Inage & Hebishima model
1) Qualia are represented in the private language, and are non-communicable. On the other hand, the brain develops in parallel to the private language a public language based on the probability distance (Kullback–Leibler), and the public language is equal (approximately, in probabilistic terms) for all individuals, and is therefore communicable.
2) The family F of events in the probability space contains the empty set, which can be regarded as a philosophical zombie, but its probability is zero: the philosophical zombie (unconscious, but performing like a conscious being) is practically non-existent.
3) Episodes in private and public language are the direct product of the probability space of each qualia.
4) Episodic memories created in the past are also probability spaces.
5) Selection based on similarity selects the most likely future episodes.
6) One of the above options for the future is chosen by emotional weighting and executed "unconsciously."
Randomness simply depends on the state of physical phenomena in the brain and on the emotional weighting.
7) The choice of the episode is the point at which an action is generated and stored in short-term memory, and
becomes conscious, important memories are moved from short-term memory to episodic memory.
8) All interactions are time continuous, and in that sense consciousness is also time continuous, but there is a
basic time interval Δt (=neural firing time, 1ms), which corresponds to a basic frequency of 1000Hz.
9) Perception of time
The consciously perceived length Δtc of a physical time interval Δt is proportional to the number nne(Δt) of new episodes in the interval Δt:
Δtc ∝ nne(Δt),
which explains Janet’s law (the perceived flow of time is approximately proportional to age).
The past is the long-term episodic memory; the present is the current sensorial and episodic information being processed consciously in short-term memory; the future is the episodic prediction made automatically (subconsciously) and consciously in Step 4.
10.2 Integrated Information Theory (IIT) [101]
The integrated information theory was proposed by Giulio Tononi [101] to quantify consciousness using the concept of information, the crucial property being the integrated information Φ. IIT phenomenological analysis suggests that, to generate consciousness, a physical system must have a large repertoire of states (information) and it must be unified, i.e. it should not be decomposable into a collection of causally independent parts (integration).
Examples:
The thalamocortical system comprises a large number of elements that are functionally specialized; these
specialized elements are connected through an extended network that permits rapid and effective interactions
within and between areas. The thalamocortical system is at the center of consciousness in humans (see ch6).
The cerebellum is a highly modular structure, which is not well interconnected. Brain imaging and lesion
experience show that the cerebellum controls automatic actions, but is not directly involved in consciousness.
IIT introduces several concepts as necessary attributes of consciousness, on two levels: mechanisms (= state automata) and systems (= sets of interconnected state automata).
Mathematical formulation [101]
Mechanism-level quantities
A cause-effect repertoire is a set of two probability distributions, with mechanism Mt (=ABC) in state mt (=on, off), and system elements Zt (=A, B, C).
A partition is a grouping of system elements, injected with independent noise.
The earth mover's distance (EMD) is used to measure distances between probability distributions.
The integrated information φ measures the irreducibility of a cause-effect repertoire with respect to a partition Pt±1, obtained by combining the irreducibility of its constituent cause and effect repertoires with respect to the same partitioning.
The irreducibility of the cause repertoire with respect to Pt-1 is φcause, and similarly φeffect for the effect repertoire. Combined, φcause and φeffect yield the irreducibility of the cause-effect repertoire as a whole.
The minimum-information partition (MIP) of a mechanism and its purview is the partition with minimal φ [101].
The choice of elements with maximum φ over the MIP specifies a maximally irreducible cause-effect repertoire.
A concept is the maximally irreducible cause-effect repertoire of a mechanism in its current state over the purview Z*t±1; the purview Z*t±1 specifies “what the concept is about”.
The intrinsic cause-effect power φMax of the concept is the concept's strength (= integrated information) [101].
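As an aside on the basic distance used here: for one-dimensional discrete distributions on a common, equally spaced support, the earth mover's distance reduces to the L1 distance between cumulative distributions; a minimal Python sketch (with toy numbers) is:

# Minimal sketch: earth mover's distance between two 1-D discrete distributions defined
# on the same equally spaced support. In this special case the EMD is the sum of absolute
# differences of the cumulative distributions (times the bin spacing).
import numpy as np

def emd_1d(p, q, spacing=1.0):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))) * spacing)

# e.g. cause repertoires of a whole mechanism vs. its partitioned version (toy numbers)
whole       = [0.50, 0.25, 0.25, 0.00]
partitioned = [0.25, 0.25, 0.25, 0.25]
print(emd_1d(whole, partitioned))   # 0.75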
System-level quantities
A cause-effect structure is the set of concepts specified by all mechanisms Mt with φMax(Mt) > 0 within the system in its current state.
A unidirectional partition P(Mt) = {S1, S2} is a grouping of system elements S1 and S2, injected with independent noise.
The extended earth mover's distance XEMD(C1, C2) of two cause-effect structures C1, C2 minimizes the EMD for cause and effect over the members of C1, C2:
XEMD(C1, C2) = min { EMD(Acause, Bcause) + EMD(Aeffect, Beffect) | A ∈ C1, B ∈ C2 }.
The integrated (conceptual) information Φ measures the irreducibility of a cause-effect structure with respect to a unidirectional partition P: Φ(st, P) measures the loss of information due to the partition P.
The minimum-information partition of a set of elements in a state is the partition with minimal Φ.
The intrinsic cause-effect power Φ(st) of a set of elements in a state is its Φ with respect to the minimum-information partition; it defines the maximal element set S*t, which maximizes Φ(st) over all element sets St.
A complex is a set of elements S*t with ΦMax = max { Φ(s*t) | s*t ⊆ S*t }; it specifies a maximally irreducible cause-effect structure, also called a conceptual structure. Conceptual structures are conscious.
Criticism
Björn Merker, David Rudrauf and Kenneth Williford claim that
-IIT does not demonstrate that brain states which are shown experimentally to be conscious/unconscious are such in the sense of IIT;
-Φ reflects the efficiency of global information transfer rather than the level of consciousness.
In addition (JH):
-IIT is unnecessarily complicated as a mathematical model: it uses three levels of minimization, and it uses as its basic distance the minimized probability distance EMD instead of the simple and established analytical probability distance (Kullback–Leibler), as Inage & Hebishima do; in consequence, IIT-based models are numerically intractable, as opposed to those based on neural networks;
-IIT does not account for learning, which is central to all nervous systems in organisms;
-IIT does not describe neural networks, which have been shown to describe accurately the function of neurons in the animal nervous system locally, and also learning and memory on the global level in the brain.
10.3 Attention schema theory (AST)
Attention schema theory (AST) by Michael Graziano [104] is a neuropsychological scientific theory of consciousness. It describes the act of cognition as a combination of three aspects:
-the self = episodic memory (personal event experience) + symbolic memory (symbolic & image representation of reality)
-the symbolic information of the perceived episode
-the subjective = sensory information of the perceived episode
Example: “I see that shiny red apple” combines the 3 aspects:
-the self
-apple = the symbolic information of “apple” in all its aspects
-subjective = the sensory information of this particular apple
AST does not consider the important aspect of emotional evaluation (as in Inage & Hebishima) and of the resulting action (as in Inage & Hebishima and in GWT) based on the selected similar episode and its emotional evaluation.
10.4 Global workspace theory (GWT)
Global workspace theory (GWT) is a simple cognitive architecture that has been developed to account qualitatively for a large set of matched pairs of conscious and unconscious processes. It was proposed by Bernard Baars 2001 [103].
GWT involves a fleeting memory (sensory memory buffer) with a duration of a few seconds (much shorter than the 10–30 seconds of classical working memory), whose contents are broadcast to cognitive/reacting receiving processes in the brain; globally broadcast messages can evoke actions in the receiving processes.
Individual as well as allied processes compete for access to the global workspace by messages.
Recent research offers preliminary evidence for a sensory memory buffer store and indicates a gradual but rapid
decay with extraction of meaningful information severely impaired after 300 ms and most data being
completely lost after 700 ms.
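The competition-and-broadcast cycle just described can be illustrated with a minimal sketch; the process names, activation values and the 700 ms buffer lifetime are illustrative assumptions, not part of Baars' formal model.

```python
# Minimal sketch of one global-workspace cycle: specialist processes compete
# for the workspace, the winner's message is broadcast to all receivers, and
# the sensory buffer content decays within well under a second.
import time

BUFFER_LIFETIME = 0.7   # seconds; "most data completely lost after 700 ms"

class Process:
    def __init__(self, name, activation):
        self.name, self.activation = name, activation
        self.inbox = []
    def receive(self, message):
        self.inbox.append(message)

def workspace_cycle(processes, buffer_timestamp, message):
    """One competition-and-broadcast cycle of the global workspace."""
    if time.time() - buffer_timestamp > BUFFER_LIFETIME:
        return None                                      # buffer already decayed
    winner = max(processes, key=lambda p: p.activation)  # competition
    for p in processes:                                  # global broadcast
        if p is not winner:
            p.receive((winner.name, message))
    return winner

processes = [Process("visual", 0.9), Process("auditory", 0.4), Process("motor", 0.6)]
winner = workspace_cycle(processes, buffer_timestamp=time.time(), message="red apple ahead")
print("broadcast by:", winner.name if winner else "nothing (buffer decayed)")
```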
GWT specifies systems which represent contents with or without those contents becoming conscious, such as
the dorsal (DAS) and ventral (VAS) cortical streams of the visual system.
Visual systems
Factor               Ventral system (what)               Dorsal system (where)
Function             identification                      visually guided behaviour
Sensitivity          high spatial frequencies: details   high temporal frequencies: motion
Memory               long-term                           very short-term
Speed                slow                                fast
Consciousness        high                                low
Frame of reference   object-centered                     viewer-centered
Visual input         mainly central macula               across the retina
Monocular vision     generally small effects             large effects, e.g. motion parallax
Auditory systems (language recognition)
Auditory ventral stream (AVS)
The auditory ventral stream (AVS) connects the auditory cortex with the middle temporal gyrus and temporal
pole, which in turn connects with the inferior frontal gyrus.
This pathway is responsible for sound recognition, and is accordingly known as the auditory 'what' pathway.
The functions of the AVS:
sound recognition (bilateral)
The MTG and temporal pole (TP) are thought to constitute the semantic lexicon, a long-term memory
repository of audio-visual representations that are interconnected on the basis of semantic relationships.
The role of the human superior temporal gyrus (mSTG-aSTG) in sound recognition was demonstrated via
functional imaging studies that correlated activity in this region with the isolation of auditory objects from
background noise.
sentence comprehension (bilateral)
The MTG-TP region of the AVS appears to have a role in sentence comprehension, possibly by merging
concepts together.
Auditory dorsal stream (ADS)
The auditory dorsal stream (ADS) connects the auditory cortex with the parietal lobe, which in turn connects
with the inferior frontal gyrus (IFG).
speech production
Interference in the left posterior superior temporal gyrus (pSTG) and inferior parietal lobule (IPL) resulted in
errors during object-naming tasks, and interference in the left IFG resulted in speech arrest. Magnetic
interference in the pSTG and IFG of healthy participants also produced speech errors and speech arrest,
respectively.
vocal mimicry
Attention to phonemes correlates with strong activation in the pSTG-pSTS region.
speech monitoring
Connections from the IFG to the pSTG relay information about motor activity, leading to activation of the
pSTG-pSTS-Spt region.
Integration of phonemes with lip-movements
Phonological long-term memory
There is a long-term store for word meanings located in the MTG-TP of the AVS (i.e., the semantic lexicon), and
also a long-term store for the names of objects located in the Spt-IPL region of the ADS.
Phonological working memory
is probably concentrated in the IPL.
11 Evolution of terrestrial biological and artificial complexity, intelligence, consciousness [JH]
Biological complexity is the number of reactions in a biological organism, and this is approximately equal to
the number of genes and corresponding proteins, which catalyze these reactions.
Biological complexity does not change much any more from insects to mammals [108].
Intelligence is the capability of successful behavioral reaction to the sensorial input (evaluated as biological
gain or loss in food supply, reproduction, security, or group status), based on the set of learned behaviors. It is
a measure of behavioral complexity, which adds to the biological complexity.
Intelligence can be quantified by the maximum complexity of a problem that the brain can solve within a
certain time.
Intelligence is presumably roughly proportional to the number of synapses, i.e. brain complexity [107].
Consciousness is an advanced intelligence with sensorial input and motoric-chemical output, based on an
advanced representation of the outer/inner world, capable of goal-oriented evaluation and behavior, where
the decision center is a network of competing behavioral processes, which interact and compete for attentional
focus.
S7 is the first stage of consciousness (see below).
Consciousness can be measured by the total complexity of the network of competing processes.
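One possible, deliberately simple proxy for "total complexity of the network of competing processes" is to count the processes and their interactions; the toy process graph and the node-plus-edge metric below are illustrative assumptions, not an established measure.

```python
# Toy proxy for the complexity of a network of competing processes:
# nodes (processes) plus directed interaction edges. Illustration only.
process_graph = {
    "visual attention":     ["motor planning", "memory recall"],
    "motor planning":       ["visual attention"],
    "memory recall":        ["emotional evaluation"],
    "emotional evaluation": ["motor planning"],
}

def network_complexity(graph: dict[str, list[str]]) -> int:
    """Number of processes plus number of directed interactions."""
    nodes = len(graph)
    edges = sum(len(targets) for targets in graph.values())
    return nodes + edges

print("network complexity =", network_complexity(process_graph))
```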
S1 Life-proto-cycles in a self-assembling lipid membrane [13] ch3.9
Stage 1 is the genuine self-catalytic life-proto-cycle in a self-assembling lipid membrane.
The underlying genetic code is a proto-code with 3 amino acids (Gly, Pro, Cys) and 2 nucleobases (Gua, Cyt)
with 5 proto-genes (GG, GGG, CC, C, GC) coding for 5 peptides (Gly2, Gly3, Pro2, Pro, GlyPro), which
catalyze the synthesis of 5 basic components (Gly, Cys, Pro, Cyt, Gua). The peptide synthesis is carried out by
peptide nucleic acids (PNAs) generated in the proto-cycle, which correspond to the modern RNA transferases.
The proto-genes create a collection of peptides, which act as catalysts for a collection of similar reactions.
The life-proto-cycle operates with three fundamental molecule families: catalysts = proteins built from amino-
acids , proto-genes =poly-nucleo-bases (PNB) built from nucleo-bases, and lipids forming the membrane.
Complexity= 5 genes.
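As a purely illustrative rendering of this proto-code, the five proto-gene-to-peptide assignments listed above can be written as a lookup table; this is only a mapping sketch, not a chemical model.

```python
# Lookup-table sketch of the proto-genetic code quoted above: 5 proto-genes
# built from the nucleobases G and C, each coding for one short peptide made
# of Gly and Pro. Illustration of the mapping only.
PROTO_CODE = {
    "GG":  "Gly-Gly",      # Gly2
    "GGG": "Gly-Gly-Gly",  # Gly3
    "CC":  "Pro-Pro",      # Pro2
    "C":   "Pro",
    "GC":  "Gly-Pro",
}

def translate(proto_gene: str) -> str:
    """Translate a proto-gene (string of nucleobases) into its peptide."""
    return PROTO_CODE.get(proto_gene, "unknown proto-gene")

for gene in ("GG", "GGG", "CC", "C", "GC"):
    print(f"{gene:>3} -> {translate(gene)}")
```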
S2 Proto-cells, prototype: LUCA [13] ch3.11
LUCA is the Last Universal Common Ancestor cell, which lived presumably 4 billion years ago.
LUCA had the genetic RNA apparatus with one tRNA for every amino acid, with the addition of 15 codons
coding for the 15 tRNAs.
LUCA had a single RNA strand with the supporting skeleton of ribose and phosphate radicals, which makes
the RNA much more stable than its forerunner, the poly-nucleo-bases (PNB). The RNA is exactly copied by the
enzyme RNA polymerase (which is coded in a dedicated gene). Now the RNA carries fixed genes and is not a
collection of quasi-genes like the PNBs.
LUCA was living in hydrothermal vents and/or hot volcanic pools.
LUCA's environment conditions were: T ~ 80°C, pH = 9, intermediate pressure.
LUCA used acetogenesis (the Wood-Ljungdahl pathway) as its energy cycle.
The precursor molecules were: H2, CO2, NH3, H2S, and the PO4^3- ion.
The biosynthetic reaction network consists of 400 basic reactions.
Complexity= 400 genes.
S3 Structured cells, prototype: simple protozoan
Structured cells develop specialized organelles: in eukaryotes the organelles are mitochondria, nucleus,
ribosomes, lysosomes, chloroplasts.
Organelles specialize in certain functions, e.g. mitochondria in energy production, which improve the
adaptability of the cell.
Protozoa have sizes from 2 µm to 2 mm; typical species are amoebae.
Complexity= protozoan Trypanosoma ~6200 genes / 25 Mbases [105], compared to human 47000
genes / 3000 Mbases
S4 Multicellular organisms
Multicellular organisms consist of ensembles of cells forming organs with specialized functions, e.g. digestion,
reproduction, movement. Multicellular organisms reproduce mostly sexually with genetic recombination.
Complexity= about the same as S3
S5 Animals with a nervous system , simple prototype: jellyfish
Multicellular animals developed a brain based on neuron-networks (mathematical model=spiking neural
networks), which processes sensory input producing a behavioral output= movement or chemical reaction.
They are capable of learning and keeping events in memory.
Complexity=jellyfish Aurelia 30000 genes/713Mbases [106]
Brain complexity=8300 neurons [107]
S6 Animals with brain and communication, prototype: bee
Animals with a brain which live in societies develop a communication language in order to pass information
about the environment and about their inner state to others. They also develop an inner representation of the
outer world and of their individual episodic history.
Complexity=bee 300Mbases [108]
Brain complexity=bee 230000 neurons [107]
S7 Animals with self-recognition, prototype: primates
These are animals with a complex brain, which live socially, and are able to recognize themselves as an
individual, i.e. they pass the mirror test. To this category belong: primates, dolphins, ravens, elephants.
Arguably, this can be called the first stage of consciousness.
It is known that primates, like humans, possess a network of parallel cortical processes.
S8 Biological beings with language, prototype: hominins
Hominins (including Sapiens, Neandertals, Denisovans, and probably Erectus) are the only species capable of
complex language communication and of passing on complex information about outer-world phenomena, the
inner mental world, and behavior.
The simple original Sapiens language probably comprised about 300 words and a dozen phonemes (like
Piraha [109], Creole Sranan [110]), with words for sensory information (like color), simple behavior
(running), simple physical notions (water).
It makes sense to call this the second stage of consciousness.
S9 Biological beings with symbolic language, prototype: Sapiens humans
There is archeological and genetic evidence that Sapiens developed symbolic thinking (abstract idea = relational
network of basic words) about 70000 years ago, which enabled the successful migration out of Africa and
ensured its decisive advantage over Neandertals and Denisovans [2].
This seems to be coupled with the ability "to see oneself from outside", i.e. to think objectively.
Furthermore, it seems that the capability to imagine what someone else is thinking, i.e. our human intuition for
others, arose at the same time.
The three qualities can be summed up as the ability of objective, symbolic communication.
The language was extended accordingly, to encompass abstract symbols, grammar (relational rules between
words), and recursive expressions.
The second major accomplishment of Sapiens was the invention of writing, which made possible the creation
of a written knowledge base (technical, historical, artistic). This started the era of settled civilizations in Egypt,
Sumer, and the Indus valley, about 5000 years ago [2].
I think it is justified to call the intelligence of those individuals the third stage of consciousness.
S10 Prospective Artificial Intelligence (AI)
Transformer AIs based on transformer-type Software Neural Networks (SNN), like ChatGPT and LaMDA [81]
[82], are essentially next-word-guessing machines, but they are able to process and interpret natural language
and give sensible answers. When coupled with logical-numerical problem solvers like Minerva, they can solve
logical-numerical problems based on natural-language input and output, and perform well even in Math
Olympiads [111].
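To make the phrase "next-word-guessing machine" concrete, here is a deliberately tiny sketch of next-word prediction using bigram counts over an invented three-sentence corpus; real transformer models such as ChatGPT or LaMDA learn vastly richer, context-dependent statistics, but the basic input-output contract is the same.

```python
# Toy next-word guesser: a bigram model counts observed word transitions in a
# small invented corpus and predicts the most frequent successor of a word.
from collections import Counter, defaultdict

corpus = [
    "the brain processes sensory input",
    "the brain produces behavioral output",
    "the network learns from sensory input",
]

counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1   # count observed word transitions

def guess_next(word: str) -> str:
    """Return the most frequently observed successor of `word`."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print("the ->", guess_next("the"))          # most frequent successor: 'brain'
print("sensory ->", guess_next("sensory"))  # 'input'
```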
Hardware Neural Networks (HNN) are at least 1000 times faster than SNNs and thus far outperform humans in processing speed.
In future, transformer-type AI on HNN’s coupled with problems-solvers on parallel von-Neumann-CPU’s will
be enhanced by robotic sensory and motoric hardware-software and a Central-Goal-Algorithm (CGA) to
ensure goal-oriented behavior with goal-based evaluation capability, and an independent energy source (solar
powered accumulator). Let us call these machines autonomous goal-oriented language-processing calculating
robots (AGLCR).
AGLCR’s fall into the category of conscious machines: as NN’s they work with massive-parallel processes ,
have an advanced representation of the outer/inner world, are equipped with sensorial input and motoric-
calculation-output, are capable of goal-oriented evaluation and behavior.
These AGLCR’s will be superior to humans in mental, lin guistic, and behavioral capacity.
It is justified to call the intelligence of those AGLCR’s the fourh stage of consciousness.
S11 Prospective self-programming Artificial Intelligence
AGLCR’s can be equipped with the capability of modifying its CGA.
I describe this as the prospective fifth stage of consciousness.
Literature
[1] P. Humphreys, The Oxford handbook of philosophy of science, Oxford University Press, 2016
[2] Y. Harari, Sapiens, Harper, 2015
[3] M. Kaku, Future of the mind, Penguin, 2014
[4] A. Damasio, Self comes to mind, Random House, London, 2011
[5] The brain maps out ideas and memories like spaces, Quanta magazine, 01/2019
[6] E. Moser et al., Integrating time from experience in the lateral entorhinal cortex, Nature 561 2018
[7] B. Baars, A cognitive theory of consciousness, Cambridge Univ. Press, 1988
[8] J. Helm, Physics fundamentals, Researchgate, 2019
[9] S. Carlip, Arrow of Time Emerges in a Gravitational System, Physics 7-111, 09/2014
[10] D. Doerner, Die Mechanik des Seelenwagens, Hans Huber, 03/2002
[11] K. Popper, Objective knowledge: an evolutionary approach, Oxford Univ. Press, 1972
[12] I. Kant, Critique of Pure Reason, Hackett, 1996
[13] J. Helm, Life origin and basic mechanism of life, Researchgate, 2020
[14] A.C. Grayling, History of philosophy, Penguin, 2019
[15] Philosophy of linguistics, stanford.edu, Stanford encyclopedia of philosophy, 09/2011
[16] Philosophy and evolution of language, wikipedia, 03/2021
[17] First words: The surprisingly simple foundation of language, NewScientist., 05/ 2017
[18] P. O’Hara, Encyclopedia of political economy, Routledge 2003
[19] Ch. Quarch, Das große Ja, Goldmann Verlag, 2014
[20] W. Schmid, Schönes Leben, Suhrkamp, Frankfurt/M 2000
[21] F. Nietzsche, Kritische Studienausgabe, Berlin-New York, 1988
[22] Platon, Complete works, ed. J.M. Cooper, Hackett 1997
[23] M. Foucault, Les mots et les choses, Gallimard , Paris 1966
[24] Der kontrollierte Stoß, Phiuz 04/20, July 2020
[25] The inescapable casino, Scientific American 11/2019
[26] Forscher simulieren Weltgeschichte, Spektrum der Wissenschaft 09/2013
[27] J. Hentze/C. Buschmann, Grundlagen der Betriebswirtschaftslehre, TU Braunschweig 1998
[28] P. Bonacich & P. Lu, Introduction to mathematical sociology, Princeton University Press 1982
[29] C. Aggarwal, Neural networks and deep learning, Springer, 2018
[30] J. Parvizi et al., Consciousness and the brain stem, PubMed, 04/2001
[31] Academic studies of human consciousness, https://consciousness2007.tripod.com/a__damasio.htm
[32] F. Patterson, Ape Language, Science 211 (4477), 1981
[33] E. S. Savage-Rumbaugh et al., Language Comprehension in Ape and Child, Society for Research in
Child Development 58, 1993
[34] A. Kantorovich, An Evolutionary View of Science: Imitation and Memetics, 2014
[35] J. Treur et al., Formalisation of Damasio's theory of emotion, feeling and core consciousness,
Consciousness and Cognition 17(1):94-113, 04/2008
[36] T. Bosse et al., Simulation and Representation of Body, Emotion, and Core Consciousness, Proceedings of
the AISB 2005 Symposium, 2005
[37] M. Graziano, Creating human-like consciousness requires just four key ingredients, New Scientist,
09/2019
[38] Mental default network, wikipedia 04/2021
[39] M. Raichle, The brain’s default mode network, Annu. Rev. Neurosci. 2015, 38
[40] Wie sich Kunstgenuss im Gehirn widerspiegelt, Bild der Wissenschaft 12/ 2018
[41] Mind of the meditator, Scientific American 11/2014
[42] S. Vossel et al., Dorsal and ventral attention systems, Neuroscientist. 2014 Apr
[43] P. Kuhnke et al., Task-Dependent Recruitment of Modality-Specific and Multimodal Regions during
Conceptual Processing Cerebral Cortex, 07/ 2020
[44] A. Barberousse et al., Philosophy of science, Oxford University Press, 06/2018
[45] J. Helm, Chemical data base, ChemicalData_JH0220.pdf, Researchgate, 2020
[46] A. Castelnovo et al., Scalp and Source Power Topography in Sleepwalking and Sleep Terrors, Sleep
10/2016
[47] D. Knuth, Backus Normal Form vs. Backus Naur Form, Communications of the ACM 07/1964
[48] N. Chomsky, Three models for the description of language, IRE Transactions on Information Theory, 1956
[49] A. Aho et al., Compilers, Principles, Techniques, and Tools, Addison-Wesley, 1986
[50] Natural language processing, wikipedia 04/2021
[51] J. Helm, Socio-political analysis, janhelm-works.com, 2019
[55] A. Ardila, A proposed neurological interpretation of language evolution, Behavioral Neurology, 2015
[56] A.G. Kamhi & M.K. Clark, Specific language impairment, Handbook of Clinical Neurology 111, 2013
[57] S. Reilly et al., Growth of infant communication between 8 and 12 months, Journal of Paediatrics and
Child Health 42, 2006
[58] P.C. Snow, The science of language and reading, Child Language Teaching and Therapy, 2020
[59] S.E. Fisher, Human Genetics: The Evolving Story of FOXP2, Current Biology29, 2019
[60] D. Cvetkovic & I. Cosic (ed), States of consciousness, Springer, 2011
[61] E. Tulving & W.Donaldson (ed) , Organization of Memory, Academic, New York 1972
[62] K. McRae et al. (ed.). The Oxford Handbook of Cognitive Psychology. Oxford University Press, New
York 2013
[63] H. L. Williams et al (ed) , Memory in the Real World, Psychology Press, Hove 2008
[64] P. Graf & D. L. Schacter, Implicit and explicit memory, Journal of Experimental Psychology 11 (3), 1985
[65] E. Parkins, Total brain total mind, www.researchgate.net, 2016
[66] R. Adolphs, The social brain, Annu Rev Psychol. 60, 2009
[67] W-J. Kuo et al., Intuition and deliberation, Science 324, 2009
[68] L.R. Squire & S.M. Zola, Structure and function of declarative and nondeclarative memory systems, Proc
Natl Acad Sci USA 93, 1996
[69] J. Yordanova et.al. , Shifting from implicit to explicit knowledge, Learn. Mem. 15 , 2008
[70] R. Banerjee & B.K. Chakrabarti Ed., Models of brain and mind, Elsevier, 2013
[71] J.H. Byrne et al., Neuroscience Online, McGovern Medical School at University of Texas Health, 2020
[72] Human brain, wikipedia, 08/2021
[73] J. Helm, Neural networks, Researchgate, 2021
[74] G. Lakoff & M. Johnson, Metaphors We Live By, Chicago University Press, 2008
[75] Th. Metzinger, M-Autonomy, Journal of Consciousness Studies 22, 2015
[76] K. Man et al., Neural Convergence and Divergence in the Mammalian Cerebral Cortex, The Journal of
Comparative Neurology 521, 2013
[77] V. Menon , Large-scale brain networks and psychopathology: a unifying triple network model, Trends in
Cognitive Sciences, 2011
[78] A. W. Toga Ed. , Brain mapping, Academic Press, 2015
[79] A. Friederici & S. Gierhan, The language network, ScienceDirect 10/2012
[80] C. Price, The anatomy of language, Journal of Anatomy , 2000
[81] LaMDA AI Chatbot's Conversation With Blake Lemoine, www.documentcloud.org, 07/2022
[82] H-T Cheng & R. Thoppilan, LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for
Everything, Google AI, 03/ 2022
[83] A. Ananthaswamy, Self-Taught AI Shows Similarities to How the Brain Works, Quanta Magazine
08/2022
[84] C. Kobayashi-Frank, Linguistic Effects on the Neural Basis of Theory of Mind, Open Neuroimaging
Journal, 2010-4
[85] M. Marraffa, Theory of Mind, Internet Encyclopedia of Philosophy, 2011
[86] R. Saxe & N. Kanwisher, The role of the temporo-parietal junction in the Theory of Mind, NeuroImage
19-4, 2003
[87] M. A. Nippold, Developmental Markers in Adolescent Language: Syntax, Semantics, and Pragmatics,
Language Speech, and Hearing Services in Schools 24, 1993
[88] Theory of mind, wikipedia, 2022
[89] T. Allison et al., Social perception from visual cues: Role of the STS region, Trends in Cognitive Sciences
4-7, 2000
[90] M. Schrimpf et al., The neural architecture of language, PNAS 118 (45), 11/2021
[91] R. Hessbrügg, Der Sinn des Vergessens, Bild der Wissenschaft 05-2021
[92] G. S. Forrester et al., Evolutionary motor biases and cognition in children with and without autism,
Scientific Reports 10 17385 , 10/2020
[93] W. Wildgen, Linguistic functionalism in an evolutionary context, 40th Annual Meeting of the Societas
Linguistica Europaea, Joensuu 2007
[94] H. Hodson, Deciphering the banter of the apes, NewScientist 01/ 2015
[95] I. Schlesewsky & M. Schlesewsky, Neurobiologie der Sprache: Ende der Exklusivität, Spektrum der
Wissenschaft, 05/2014
[96] G. S. Forrester, Simple puzzles are revealing why humans are the only talking apes, New Scientist 09/2022
[97] H. Hebishima et al., Mathematical definition of public language, and modeling of will and consciousness ,
arXiv: 2210.14491, 2022
[98] L. I. Perlovsky, Toward physics of the mind , Physics of Life Reviews 3 , 2006
[99] S. Inage & H. Hebishima, Application of Monte Carlo stochastic optimization (MOST) to deep learning,
Mathematics and Computers in Simulation 199 , 2022
[100] R. Plutchik, A psychoevolutionary theory of emotions, Social Science Information. 21: 1982
[101] G. Tononi et al., From the phenomenology to the mechanisms of consciousness: Integrated Information
Theory 3.0, PLOS Computational Biology , May 2014
[102] T. Bayne et al. ed., The Oxford companion of consciousness, Oxford University Press, 2014
[103] B. Baars, The conscious access hypothesis: Origins and recent evidence, Trends in Cognitive Sciences 6,
2002
[104] M. Graziano, Human consciousness and its relationship to social neuroscience, Cogn Neurosci. 2, 2011
[105] N. M. El-Sayed, Comparative Genomics of Trypanosomatid Parasitic Protozoa , Science 309- 404 , 2005
[106] D. A. Gold, The genome of the jellyfish Aurelia and the evolution of animal complexity, Nature
Ecology & Evolution 3-96, 2019
[107] wikipedia, List of animals with number of neurons, 2023
[108] wikipedia, Genome size, 2023
[109] wikipedia, Piraha, 2023
[110] forum.unilang.org, Sranan, 2011
[111] K. Hartnett, To Teach Computers Math, quanta magazine 02/2023
[112] A.C. Guyton & J.E. Hall, Textbook of Medical Physiology, Elsevier Saunders, Amsterdam, 2006
[113] S. Laureys et al., Unresponsive wakefulness syndrome , BMC Medicine 8-68, 2010
[114] M. Rhodes, Cultural transmission of social essentialism , Proc Natl Acad Sci USA 109(34), 2012
[115] A. N. Schore, Attachment and the regulation of the right brain, DOI: 10.1080/146167300361309, 2000
[116] N.S. White & M.T. Alkire, Impaired thalamocortical connectivity in humans during general-anesthetic-
induced unconsciousness, Neuroimage 06/2003
[117] I. Zwir et al. , Three genetic–environmental networks for human personality, Molecular Psychiatry 26
3858–3875 , 2021
[118] I. Zwir et al. , Evolution of genetic networks for human creativity, Molecular Psychiatry 27 354–376 ,
2022
[119] R. Quian-Quiroga, Small groups of brain cells store concepts for memory formation, University of
Leicester, 02/2013
[120] R. Quian-Quiroga & I. Fried & M.J. Ison , Rapid Encoding of New Memories by Individual Neurons in
the Human Brain, Neuron 87-1 220, 07/2015
[121] Qi Liu & Pengfei Ou, English-Chinese Translation using Neural Machine Models, 2022 IEEE
Conference on Telecommunications, Optics and Computer Science (TOCS), 12/2022
[122] Minh-Thang Luong et al., Effective Approaches to Attention-based Neural Machine Translation,
arXiv:1508.04025, 09/2015
[123] R. Kanai et al., Goal-Directed Planning by Predictive-Coding based Variational Recurrent Neural
Network from Small Training Samples, IEEE International Conference on Development and Learning 16,
2021
[124] G. Buzsaki et al., Long-duration hippocampal sharp wave ripples improve memory, Science 06/2019 Vol
364 Issue 6445 pp. 1082-1086, 2019
[125] V. Ego-Stengel & M. A. Wilson, Disruption of ripple-associated hippocampal activity during rest impairs
spatial learning in the rat, Hippocampus 08 October 2009, 2009