Figures
Self-organisation at the first level. This figure illustrates four snapshots at different times during the simulation of the (final stage of) self-organisation of an ensemble comprising sixteen 'cells', whose internal and active equations of motion describe a gradient descent on prediction error, relative to the sensory states expected by each member of the ensemble. Every member is endowed with the same prior (genetic) beliefs about what it should signal and sense, depending upon its type (which has to be inferred on the basis of what it senses). These priors ultimately prescribe a point attractor for the dynamics of the ensemble. Each cell can then infer (via intracellular dynamics) its type and behave (via extracellular signalling) accordingly, while moving (via chemotaxis) to a location that fulfils its predictions about its extracellular signals. The emergent morphology of the ensemble is a cell of cells, with an internal (red) cell in the centre, surrounded by a membrane of active (green) cells in the middle, and sensory (blue) cells on the periphery. This is the spatial pattern that best fulfils the prior beliefs of all the constituent cells. Note that the cells are initially pluripotent and only acquire (i.e., infer) their (colour-coded) role in the Markov blanket in virtue of their position and signalling with other cells as they self-organise (see, for example, the internal states at times 40 and 70). This means the number of sensory, active and internal cells is not encoded in each cell's prior; rather, it is an emergent property of self-organisation under the simple prior that each cell must be a particular type of cell. Furthermore, if a cell infers that it is a particular type, then it becomes that type, because its inference is mediated by intracellular signalling that classifies a cell as one type or another. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
This figure shows the (final) results of self-organisation of an ensemble of cells, where each constituent of the ensemble is itself a local ensemble. In this example, 16 local ensembles, each composed of 16 cells, self-organise into a global ensemble. Panel A shows the evolution of the hierarchical system captured at three different moments. The colour of the central circles reflects the inferred type of cells within each local ensemble; the peripheral circles indicate the specialisation of local ensembles within the global ensemble (colours: internal, red; active, green; sensory, blue). Note that there are no external states, because the external states comprise the Markov blankets of other ensembles. The key thing to observe here is that the (slow) self-organisation of local ensembles into a global Markov blanket, starting at time 1, relies on their very existence; that is, on the (fast) self-organisation of the cells composing each ensemble. At any temporal and spatial scale, the emergence of a Markov blanket reflects a particular independency structure, where internal cells do not influence sensory (i.e., surface) cells, in virtue of their separation by active cells. This separation induces conditional independence, because of the limited range of intracellular signals (that fall off with a Gaussian function of distance). Panel B shows the same results in an alternative format; namely, the evolution of expectations about the type (i.e., differentiation) of cells within an exemplar local ensemble (left; local expectation), and of the local ensemble within the global ensemble (middle; global expectation). Notably, the identity of the local ensemble is the result (i.e., the average) of its constituent cells' beliefs about their role at the higher level. This means that a local ensemble organises in concert with the others in a global ensemble because its cellular components have communal beliefs about their role (as a local ensemble) at the global level. These cellular 'beliefs' about global identity are represented in the left illustration of Panel B. Here, the exemplar ensemble becomes an active state: all its constituents come to infer that they participate as an active state at the global level (left; local about global). Note the differentiation at both the local and global levels, while local expectations about the cells' role at the global level converge to the same type. Panel C displays the decrease in free energy of the hierarchical system as self-organisation takes place. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Journal of Theoretical Biology 486 (2020) 110089
Contents lists available at ScienceDirect
Journal of Theoretical Biology
journal homepage: www.elsevier.com/locate/jtb
On Markov blankets and hierarchical self-organisation
Ensor Rafael Palacios a,∗, Adeel Razi a,b,c, Thomas Parr a, Michael Kirchhoff d, Karl Friston a

a The Wellcome Centre for Human Neuroimaging, University College London, Queen Square, London WC1N 3BG, UK
b Monash Institute of Cognitive and Clinical Neurosciences and Monash Biomedical Imaging, Monash University, Clayton, Australia
c Department of Electronic Engineering, NED University of Engineering and Technology, Karachi, Pakistan
d Department of Philosophy, Faculty of Law, Humanities and the Arts, University of Wollongong, Wollongong 2500, Australia
Article info
Article history:
Received 14 May 2019
Revised 16 November 2019
Accepted 19 November 2019
Available online 20 November 2019
Keywords:
Self-organisation
Markov blanket
Dynamical systems
Free energy
Active inference
Abstract
Biological self-organisation can be regarded as a process of spontaneous pattern formation; namely, the emergence of structures that distinguish themselves from their environment. This process can occur at nested spatial scales: from the microscopic (e.g., the emergence of cells) to the macroscopic (e.g., the emergence of organisms). In this paper, we pursue the idea that Markov blankets – that separate the internal states of a structure from external states – can self-assemble at successively higher levels of organisation. Using simulations, based on the principle of variational free energy minimisation, we show that hierarchical self-organisation emerges when the microscopic elements of an ensemble have prior (e.g., genetic) beliefs that they participate in a macroscopic Markov blanket: i.e., they can only influence – or be influenced by – a subset of other elements. Furthermore, the emergent structures look very much like those found in nature (e.g., cells or organelles), when influences are mediated by short range signalling. These simulations are offered as a proof of concept that hierarchical self-organisation of Markov blankets (into Markov blankets) can explain the self-evidencing, autopoietic behaviour of biological systems.

© 2019 Elsevier Ltd. All rights reserved.
1. Introduction
There is growing interest in the role of Markov blankets and
associated partitions in understanding self-organisation – and the
accompanying self-evidencing that arises from Bayesian mechan-
ics ( Friston, 2019 ). A key aspect of this self-organisation is the hi-
erarchical decomposition of Markov blankets of Markov blankets.
This notion has emerged in the literature at several levels; rang-
ing from conceptual analyses in the context of ethology and evolu-
tion ( Allen, 2018 ; Clark, 2017 ; Friston et al., 2015 ; Kirchhoff et al.,
2018 ; Pellet and Elisseeff, 2008 ; Ramstead et al., 2018 ), through to
the emergence of multicellular organisms ( Kuchling et al., 2019 )
to the implicit renormalisation group that furnishes a particu-
lar perspective on (quantum, statistical and classical) mechanics
( Friston, 2019 ; Friston et al., 2014 ).
However, despite the potential importance of these conceptual
and mathematical analyses, no one has yet provided a proof of
principle that Markov blankets of Markov blankets can emerge us-
ing numerical analyses. In this paper, we report such a proof of principle by illustrating the emergence of blankets of blankets under the unitary principle of (variational) free energy minimisation.

∗ Corresponding author at: The Wellcome Centre for Human Neuroimaging, Institute of Neurology, UCL, Queen Square, London WC1N 3AR, UK. E-mail address: ensorrafael.palacios@bristol.ac.uk (E.R. Palacios).
We frame this in terms of self-organisation or pattern formation in cells, to emphasise the simplicity and biological plausibility of the underlying dynamics – although this framing is more by analogy than any detailed consideration of inter- and intracellular communication. Our primary aim was to show that hierarchical compositions of Markov blankets of Markov blankets can emerge from gradient flows on variational free energy, under an appropriate generative model.
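The flavour of such a gradient flow can be conveyed with a one-dimensional toy model. This is not the paper's simulation: the Gaussian generative model and all numerical values below are invented for illustration. An internal state descends the free-energy gradient (a sum of precision-weighted prediction errors) and settles at the exact posterior expectation about the hidden cause of a sensory sample.

```python
# Toy generative model (all values invented for illustration):
# prior psi ~ N(eta, sp2), likelihood s ~ N(psi, ss2). The internal
# state mu parameterises a Gaussian recognition density; its
# free-energy gradient is a sum of precision-weighted prediction errors.
eta, sp2 = 0.0, 1.0     # prior mean and variance
ss2 = 0.5               # sensory (likelihood) variance
s = 2.0                 # an observed sensory state

mu, lr = 0.0, 0.05
for _ in range(2000):
    dF = (mu - s) / ss2 + (mu - eta) / sp2   # dF/dmu
    mu -= lr * dF                            # gradient descent on F

# The fixed point is the exact posterior mean: a precision-weighted
# average of the sensory sample and the prior expectation.
mu_star = (s / ss2 + eta / sp2) / (1 / ss2 + 1 / sp2)
print(mu, mu_star)
```

The fixed point of the flow is the Bayes-optimal estimate; in the simulations reported below, the same logic plays out for many coupled states at once.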
In brief, the notion of a Markov blanket allows one to define
any system or structure in a way that distinguishes it from the en-
vironment or milieu in which it resides. The Markov blanket plays
the role of a statistical boundary that allows one to talk about
a system per se ( Alcocer-Cuarón et al., 2014 ; Schrödinger, 1944 ).
Such structures can be described at multiple scales, from macro-
molecules such as ribonucleic acids, through organelles to organs
and organisms and even beyond. A Markov blanket is a set of
states that separates the internal or intrinsic states of a struc-
ture from extrinsic or external states. Importantly, when interac-
tions between states are spatially dependent, as is the case for
states pertaining to the physical description of biological organ-
isms, this separation can be spatial in nature. Consequently, in
this setting, a Markov blanket describes a spatial boundary. More-
over, this boundary comprises sensory and active states, as is the
https://doi.org/10.1016/j.jtbi.2019.110089
0022-5193/© 2019 Elsevier Ltd. All rights reserved.
case for biological systems, like membrane receptors and the cy-
toskeleton underneath them. In other words, a (biological) physi-
cal boundary is a Markov blanket (with sensory and active states),
where dependencies between states are determined by location in
space. An obvious example here would be the membranes that
surround organelles and cells. The crucial aspect of a Markov blan-
ket is that it provides a formal definition of what it means for the
internal states of a structure to exist in a way that is condition-
ally independent of its external states. This definition precludes
a (spatially dependent) direct coupling between internal and ex-
ternal states, such that they only influence each other vicariously
through the Markov blanket ( Friston 2013 ).
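This conditional-independence claim can be checked numerically. The following sketch (a linear-Gaussian toy model; the coefficients are invented for illustration) builds a chain in which an external state reaches an internal state only through a single blanket (sensory) state: the two end states are strongly correlated marginally, but their partial correlation given the blanket vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# External -> sensory -> internal: the only path from psi to mu runs
# through the blanket state s, so psi and mu should be conditionally
# independent given s (zero partial correlation).
psi = rng.standard_normal(n)                    # external state
s = 0.8 * psi + 0.3 * rng.standard_normal(n)    # sensory (blanket) state
mu = 0.7 * s + 0.3 * rng.standard_normal(n)     # internal state

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

marginal = np.corrcoef(psi, mu)[0, 1]     # strong marginal dependence
conditional = partial_corr(psi, mu, s)    # vanishes given the blanket
print(marginal, conditional)
```

Vicarious coupling of this kind is exactly what the simulations below exploit: internal and external states interact, but only through blanket states.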
Markov blankets play a central role in several disciplines. For
example, in Bayesian statistics and machine learning, they organ-
ise the architecture of message passing in neuronal networks and,
indeed, the way we implement many statistical tests ( Pellet and
Elisseeff, 2008 ). In control theory, they underlie the circular in-
teractions between system and environment ( Baltieri and Buck-
ley, 2018 ). In theoretical biology, they are the cornerstone of varia-
tional approaches to self-organisation under the free energy princi-
ple. These variational treatments have been applied at many levels;
ranging from variational ethology and evolution (Ramstead, Badcock et al., 2017; Constant, Ramstead et al., 2018), through to self-organisation and adaptive behaviour in neuroscience (Friston, 2010; Limanowski and Blankenburg, 2013), down to morphogenesis and pattern formation at the cellular level (Kiebel and Friston, 2011; Friston et al., 2015), and up to mental manipulation and imagination (Hohwy, 2016; Yufik and Friston, 2016). In this sense, the
Markov blanket is a scale free concept that underwrites the dy-
namics of all self-organising systems, at some level.
Markov blankets are not necessarily spatially extensive mem-
branes; they are just a set of states that separates internal and
external states. For example, the brain’s Markov blanket might in-
clude all its sensory receptors and neuromuscular junctions. At the
cellular level, Markov blankets can be associated with membranes
that surround cells and intracellular organelles, or to the surface
of membrane-less organelles where a liquid-liquid phase separa-
tion takes place ( Mitrea and Kriwacki, 2016 ). Notably, any spatially
dependent interactions between system and environment can exist
only in virtue of the permissive role of a statistical boundary; that
is, a Markov blanket. However, this does not constrain such interactions to the physical boundary itself, as in the case of channels allowing ions to cross the cellular membrane, or the production of heat by warm-blooded animals ( Virgo, 2011 ; Virgo et al., 2011 ).
Previous work has already addressed the inextricable link be-
tween Markov blankets and living organisms. In particular, any bi-
ological self-organising system can be viewed as generating and
maintaining Markov blankets at multiple scales ( Friston, 2013 ).
Consequently, morphogenesis at any particular level of description
becomes the process of constructing a Markov blanket with a par-
ticular structure, as exemplified by the organisation of an ensem-
ble of undifferentiated cells into a differentiated target morphol-
ogy ( Friston et al., 2015 ; Kuchling et al., 2019 ). In a companion
paper, we have articulated the implications that the emergence
of nested Markov blankets have for our understanding and inter-
pretation of an organism’s dynamics, with the important consid-
eration that Markov blankets do not have to be co-extensive with
the biophysical boundaries of an organism ( Kirchhoff et al., 2018 ).
These arguments are in turn tightly connected with considerations about a system's cognitive domain that extends beyond its spatial boundaries, analogously to the cognitive domain of a 'glider' (a set of On states surrounded by Off states) in the Game of Life ( Beer, 2014 ).
In the present paper, we ask how an ensemble of constitutive
parts, endowed with Markov blankets, could self-organise to cre-
ate a Markov blanket at a higher scale; namely, a Markov blanket
of Markov blankets. In particular, we focus on the minimal set of prior beliefs that a hierarchically organised system must express, and how these beliefs at different scales are linked.
Our basic conclusion is that a single principle is sufficient to
explain the emergence of hierarchical structure; namely the vari-
ational free energy principle. This does not imply that hierarchi-
cal organisation is an emergent feature of any coupled random dy-
namical systems; rather, with the right sort of generative model,
an ensemble of Markov blankets (e.g., cells) can self-assemble a
Markov blanket around the ensemble (e.g., an organ). A generative
model here refers to a probabilistic model of how external states
influence the Markov blanket that is implicit in the dynamics of in-
ternal states. In the current setting, having the right sort of genera-
tive model can be regarded as having the right sort of prior (prob-
abilistic) beliefs that are endowed by evolution. In what follows,
we will use simulations to provide a numerical proof of principle
that minimising variational free energy (under a suitable genera-
tive model) leads to hierarchical self-organisation. Throughout the
paper “free energy” will refer to variational free energy. While this
is closely related to the thermodynamic concept of free energy (see
Friston, 2019 for details), variational free energy is an informational
quantity that provides an upper bound on surprise (a.k.a., surprisal
or the negative log probability of sensory data).
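This bound is easy to verify in a small discrete example (the probabilities below are arbitrary illustrative numbers): variational free energy equals surprise plus the KL divergence from the approximate to the true posterior, so it can never fall below surprise, and touches it exactly when the approximate posterior is the Bayesian posterior.

```python
import numpy as np

# Arbitrary illustrative numbers: a binary hidden state psi and a
# single observation s = 1.
prior = np.array([0.7, 0.3])    # p(psi)
lik = np.array([0.2, 0.9])      # p(s = 1 | psi)
joint = prior * lik             # p(psi, s = 1)
evidence = joint.sum()          # p(s = 1)
surprise = -np.log(evidence)    # surprisal of the observation
posterior = joint / evidence    # exact Bayesian posterior

def free_energy(q):
    """F = E_q[ln q(psi) - ln p(psi, s)] = surprise + KL[q || posterior]."""
    return float(np.sum(q * (np.log(q) - np.log(joint))))

F_bad = free_energy(np.array([0.5, 0.5]))  # any q: F >= surprise
F_opt = free_energy(posterior)             # optimal q: F = surprise
print(F_bad, F_opt, surprise)
```

Minimising free energy with respect to q therefore implicitly minimises surprise, which is the quantity a self-organising system cannot evaluate directly.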
This paper is organised as follows: first, we review the concept
of a Markov blanket in biological systems and draw the link be-
tween statistical independence and physical boundaries. By doing
so, we provide an intuition on the chief role that Markov blan-
kets have in self-organisation within the free energy principle. We
then consider the implications of the existence of a Markov blanket
for the behaviour of random dynamical systems obeying the varia-
tional free energy minimisation principle. A technical treatment of
Markov blankets in the emergence of physical structures and asso-
ciated (quantum, stochastic, and classical) mechanics can be found
in Friston (2019). This section emphasises the autopoietic nature of systems that, through the dynamics of their internal and active states, resist a natural tendency to disorder ( Maturana, 1974 ). In
the final sections, we describe simulations of self-organisation at
two levels: these furnish a proof of concept for self-organisation
into Markov blankets and the hierarchical formation of blankets of
blankets, respectively. We conclude with a discussion of how this
treatment relates to other characterisations of biological self-organisation.
2. Markov blankets and variational treatments of self-organisation
Biological systems generally segregate themselves from their
environment to form boundaries, which define the distinction be-
tween what is internal to the system and what is external ( Alcocer-
Cuarón et al., 2014 ; Kauffman, 1993 ; Kelso, 1995 ; Nicolis and Prigogine, 1977 ; Schrödinger, 1944 ). In this paper, these boundaries
are formalised in terms of Markov blankets; namely, statistical
boundaries that separate internal and external states (e.g., a cellu-
lar membrane separating intracellular and extracellular dynamics).
In particular, spatial boundaries are an instantiation of the statis-
tical independencies, when physical state interactions are spatially
dependent, as is often the case for biological systems. This separa-
tion is a fundamental property of self-organising systems, because
their very existence implies the presence of a boundary that dis-
tinguishes inside (i.e., self) from the outside (i.e., environment).
Living systems maintain the integrity of their boundaries (i.e.
Markov blankets), in the face of an ever-changing environment.
This means that life has evolved mechanisms for the gener-
ation, maintenance, and repair of Markov blankets. A system
endowed with such mechanisms connotes an autopoietic organ-
isation that autonomously assembles its own components; in
particular its boundaries ( Maturana, 1974 ; Varela et al., 1974 ). This
autonomy does not imply isolation from the environment, which
– on a thermodynamic account – is needed to provide energy
( Whitesides, 2002 ). Therefore, living organisms are operationally
closed, while presenting as thermodynamically open. The interac-
tion between system and environment is then mediated by the
boundary. Notably, this coupling is non-trivial, in that the organ-
ism must actively realise an ‘informational control’ of the environ-
ment (i.e., possess a teleology), by filtering, canalising and cate-
gorising signals that carry information about their external causes
( Auletta, 2010 ). This implies that the system does not merely re-
spond to sensory states, but reacts to them to infer some (use-
ful) information about the world. At the same time, the boundaries
must contain machinery that allows the system to act on external
states. In short, definitive borders are essential for living systems,
as any dynamics that happens within and between systems can
only take place in virtue of their existence ( Friston, 2013 ).
Living organisms are complex systems, denoted by non-
linear interactions between multiple hierarchically arranged and
nested components ( Hilgetag et al., 2000 ; Kauffman, 1995 , 1993 ;
Kirchhoff et al., 2018 ). As such, characterising how they self-
organise requires not only an understanding of how single com-
ponents couple to each other, but also how microscopic and
macroscopic levels interact. This invokes the notion of top-down
influences on the low level dynamics ( Ellis et al., 2012 ) and
vice versa.
Self-organisation has been addressed extensively in theoretical
biology using tools from statistical thermodynamics and informa-
tion theory to explain how biological systems resist a natural ten-
dency to disorder. This holdout is an apparent violation of the sec-
ond law of thermodynamics, or at least standard descriptions of
it ( Evans and Searles, 2002 ; Seifert, 2012 ). A more recent line of
work within this framework ( Friston 2013 ) sees living organisms
as placing an upper (free energy) bound on their self-information
(i.e., negative log likelihood of sensed states). This imperative is
motivated by the fact that biological systems have to maintain sen-
sory states within physiological bounds. This means the Shannon
entropy (i.e., dispersion) of sensory states is necessarily bounded.
Shannon entropy is the path or time average of self-information;
also known as surprisal or surprise. In short, self-organisation can
be regarded as synonymous with systems that place an upper
bound on their self-information or surprise. In this variational for-
mulation of self-organisation –that emphasises its inferential as-
pect – living organisms are understood as placing a (free energy)
bound on surprise.
These arguments rest upon ergodicity assumptions (implicit in
the fact that the sorts of systems we are interested in have char-
acteristic measures that persist over time). Ergodicity implies that,
over a sufficiently long period, the time spent in a particular loca-
tion of state-space is equal to the probability that the system will
be found at that location when sampled at random ( Friston, 2013 ).
If this probability measure is finite, it means that any system will
revisit all its states (or their neighbourhoods) time and time again.
It is this peculiar behaviour that underwrites self-organisation;
namely, the existence of an attracting set of states that endow liv-
ing systems with characteristic states that they visit time and time
again.
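This equivalence of time and ensemble averages can be illustrated with a two-state Markov chain (the transition probabilities are invented for illustration): the fraction of time a single long trajectory spends in each state converges to the stationary (ensemble) distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1],     # toy transition probabilities
              [0.2, 0.8]])

# Ensemble measure: the stationary distribution is the left eigenvector
# of P with eigenvalue 1 (here pi = [2/3, 1/3]).
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

# Time average: the fraction of steps a single long trajectory spends
# in each state. Ergodicity says the two measures coincide.
steps, state = 200_000, 0
cdf = P.cumsum(axis=1)
u = rng.random(steps)
counts = np.zeros(2)
for t in range(steps):
    counts[state] += 1
    state = int(np.searchsorted(cdf[state], u[t]))
time_avg = counts / steps
print(pi, time_avg)
```

The attracting set here is trivially the pair of states themselves; the point is only that the trajectory revisits them with frequencies given by the invariant measure.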
The existence of an attracting set means that one can interpret the long-term average of surprise as the entropy of the system's sensory states. Crucially, because surprise is (negative) Bayesian model evidence, minimising free energy – defined as an upper bound on surprise – is the same as maximising a lower bound
on the evidence for an implicit model of how sensory states are
generated. In other words, the system can be regarded as a (gen-
erative) model of its environment ( Conant and Ashby, 1970 ), and
will look as if it is gathering evidence for its own existence. This
has been called self-evidencing ( Hohwy, 2016 ). It follows that – by minimising free energy – biological systems place an upper bound on the entropy of their sensations by inferring their causes; this is
also known as active inference ( Friston et al., 2010 ), and is closely
related to formulations of the perception-action cycle in the life
sciences, like embodied cognition ( Clark, 2009 ), artificial intelli-
gence ( Ay et al., 2008 ), and cognitive neuroscience ( Fuster, 2004 ).
In short, self-organisation entails the bounding of self-information that can be cast as self-evidencing.
In what follows, we use mathematical and numerical analyses
that build upon a free energy formulation of pattern formation
( Friston et al., 2015 ). We start with subsystems whose dynamics
possess a Markov blanket as an attracting set. We then integrate
the system until it self-organises into a stable configuration. Subse-
quently, we extend the simulation to consider hierarchical systems;
namely, configurations of configurations (i.e., blankets of blankets)
that could, in principle, be extended indefinitely.¹ These simulations were used to test the following hypothesis: if the main-
tenance of Markov blankets can be cast as self-evidencing, then
self-organisation should be an emergent property ( Kirchhoff et al.,
2018 ) of subsystems that ‘believe’² they participate in – or are enclosed by – a Markov blanket. Because Markov blankets are de-
fined by conditional independencies, the requisite beliefs can be
specified simply, in terms of communication or signalling between
subsystems. In other words, it should be possible to reproduce hi-
erarchical self-organisation by equipping subsystems with beliefs
about how they influence –and are influenced by –other subsys-
tems. The next section considers the formal basis of our simula-
tions, based upon random dynamical systems and their probability
density dynamics.
3. The Markov blanket partition
This section associates random dynamical systems with living
organisms, where the states of a system stand for its internal states
(e.g., intracellular states), its blanket states (e.g., receptors on a cell
membrane and the actin filaments of the cytoskeleton) and exter-
nal states (e.g., extracellular milieu). The systems under considera-
tion are complex (i.e., non-linear and hierarchical) and organise in-
dependently of any applied or external gradient: we will see that
such systems exhibit a process of pattern generation that leads to
definitive boundaries (i.e. Markov blankets), defining internal states
and their relationship with external states.
The notion of a Markov blanket was originally proposed in the
context of Bayesian networks or graphs ( Pearl, 1988 ), where it refers to the parents of a set of states (that influence it), its children (that are influenced by it), and the children's parents. The
Markov blanket defines the conditional independencies between a
set of states (the system) and a second set of states (the en-
vironment). This concept can be translated into a biological set-
ting: for example, the intracellular milieu of a cell represents the
internal states and the plasmalemma corresponds to the Markov
blanket, through which communication between intracellular and
extracellular states is mediated ( Auletta, 2013 ; Friston, 2013 ). Cru-
cially, the Markov blanket can be decomposed into sensory S and
active A states, which are and are not children of the external
¹ We thoroughly explore the implications of the hierarchical organisation of Markov blankets for the description of living organisms in the more philosophically motivated paper by Kirchhoff et al. (2018). On the other hand, from an evolutionary perspective, these arguments encourage new questions and interpretations, like the identity of the very first prior, or major evolutionary transitions of priors in unicellular and multicellular organisms (see discussion).
² This term is used as a synonym of prior expectations in the context of Bayesian inference.
Table 1
Definition of the tuple (Ω, Ψ, S, A, M, p, q) for self-evidencing.

Ω – a sample space from which random fluctuations ω are drawn.
External states: f_Ψ : Ψ × A × Ω → ℝ^Ψ – states of the world (e.g. extracellular milieu) that depend on themselves and active states.
Sensory states: f_S : Ψ × A × Ω → ℝ^S – states of sensors (e.g. receptor activity) that depend upon external and active states.
Active states: f_A : S × M × Ω → ℝ^A – states of action on the world (e.g. exocytosis of signalling molecules) that depend upon sensory and internal states.
Internal states: f_M : S × M × Ω → ℝ^M – the internal states of a system (e.g. genetic transcription) that depend on themselves and sensory states.
Generative model: p(ψ, S, A, μ | m) – a probability density function over external, sensory, active and internal states for a system denoted by m.
Variational density: q(ψ | μ) – a probability density function over external states parameterised by internal states.
Fig. 1. System comprising interacting states. In (A) spatially-independent coupling among states is mediated by long-range interactions. In the first (left) panel all states
influence each other, and are therefore indistinguishable. In (B) only short-range interactions are allowed; thus coupling among states is spatially dependent. However, two
sets of states exist only in virtue of their spatial separation: i.e., they are effectively independent. In (C), internal (red) and external (blue) states can be distinguished in
virtue of the separation mediated by a third set; namely, the Markov blanket, composed of sensory (yellow) and active (orange) states. External states can influence internal
states only by acting on sensory states. On the other hand, internal states couple back to external states through active states. Note that in this scenario, active states are
shielded from external states by sensory states – and sensory states are shielded from internal states by active states. This is the simplest dependency structure leading to a
Markov blanket. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
states, respectively. Thus, the existence of a Markov blanket S × A induces a partition of states x ∈ X = Ψ × S × A × M: external states act on sensory states, which influence, but are not influenced by, internal states. Internal states couple back through active states, which influence but are not influenced by external states ( Table 1 ). This partition ensures a statistical separation between internal and external states, in the sense that they are independent when conditioned on the Markov blanket.
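Pearl's graphical definition can be written down directly. The sketch below (the graph and node names are invented purely for illustration) computes a node's Markov blanket – its parents, its children, and its children's other parents – from a table of parent sets.

```python
# Pearl's definition: the Markov blanket of a node in a Bayesian
# network (a DAG) comprises its parents, its children, and its
# children's other parents. The toy DAG below is invented.
def markov_blanket(node, parents):
    """`parents` maps each node to the set of its parents in a DAG."""
    children = {n for n, ps in parents.items() if node in ps}
    coparents = set()
    for c in children:
        coparents |= parents[c]
    return (parents[node] | children | coparents) - {node}

dag = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
# Blanket of C: parents {A, B}, child {D}, no other co-parents.
print(markov_blanket("C", dag))
```

Conditioned on this set, the node is independent of every other node in the graph, which is the property the biological translation above relies upon.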
How can this statistical concept be translated into a biological setting, and why is its presence so important? We start by con-
sidering a system in which long-range (e.g., electromagnetic) in-
teractions are possible, and states’ identity rests upon their cou-
pling. Here, every state interacts with all others, irrespective of
its spatial position; every state is therefore indistinguishable from
the remainder, because the fully interconnected nature of the sys-
tem precludes any statistical separation of one state from another
( Fig. 1 a). To engender statistical structure (i.e., an identity), cou-
pling has to be limited via interactions restricted in space. One
possible scenario displays two sets of states located far enough
for them to be statistically independent: the state of one set does
not influence the other and vice versa ( Fig. 1 b). However, in this
case interactions between sets are also precluded. When consid-
ering biological systems and their environment, this scenario be-
comes unrealistic, given the definition of biological organisms as
open systems ( Whitesides, 2002 ). Therefore, two sets of states can
be meaningfully associated with a biological organism’s intrinsic
(i.e., internal) and extrinsic (i.e., external) states only when a
Markov blanket exists, which defines both conditional (i.e., spatial)
independencies – and hence identities – and interactions between
the two sets of states. Notably, it is the restriction of interactions
in space that justifies the association between statistical and spatial
independence, and thus between Markov blankets and spatial
boundaries. This is the minimal, and thus most general, description
of a self-organising biological organism ( Fig. 1 c). Notably, interactions
occur in virtue of the partition of the blanket states into sensory
and active states, mediating the vicarious influence of external on
internal states and the influence of internal on external states,
respectively.
In reality, segregation emerges in the presence of coupling. In
other words, a subsystem differentiates itself from the environ-
ment, but remains (statistically or energetically) coupled to it. This
is possible when two sets of states are conditionally independent
not just because of their spatial separation, but in virtue of a third
set; namely, blanket states ( Fig. 1 c). These blanket states comprise
sensory and active states, mediating the vicarious influence of ex-
ternal on internal states and the influence of internal on external
states, respectively. This concludes our description of a minimal
partition that enables a meaningful separation of internal and ex-
ternal states.
How does the concept of a Markov blanket extend when consid-
ering the hierarchical structure of biological systems? Let us group
internal and blanket states into a single (macroscopic vector) state.
If this macroscopic state participates in some meaningful structure,
a macroscopic Markov blanket has to emerge, whose sensory and
active states –and the internal states insulated within –will each
be composed of microscopic Markov blankets. Hence, the forma-
tion of Markov blankets at any level of hierarchical organisation is
intimately linked to the maintenance of Markov blankets ‘all the
way down’ ( Fig. 2 ). On this view, self-organisation is a recursive
process of boundary formation that spans all levels of hierarchi-
cal organisation. Later, we provide a proof of concept for this ar-
gument by simulating the hierarchical self-organisation of Markov
blankets.
In summary, self-organisation has to feature the emergence
of boundaries that define an internal state space, separating it
from external states, while allowing for vicarious coupling. It fol-
lows that hierarchical self-organisation requires the emergence
of Markov blankets of Markov blankets. In the next section,
we turn to the nature of the dynamics that underwrite this
emergence.
E.R. Palacios, A. Razi and T. Parr et al. / Journal of Theoretical Biology 486 (2020) 110089 5
Fig. 2. Markov blanket of Markov blankets. We now broaden the perspective, and
consider each Markov blanket (and internal states) as a macroscopic state. Again,
given short-range interactions, the only way for a system to exist at this macro-
scopic level is to be separated from its environment by a Markov blanket. The hier-
archical nature of this system is induced by (macroscopic) Markov blankets of (mi-
croscopic) Markov blankets, each of them insulating its respective internal states.
(For interpretation of the references to color in this figure legend, the reader is re-
ferred to the web version of this article.)
4. Dynamical systems, self-organisation and self-evidencing
We will be dealing with random dynamical systems expressed
as Langevin equations of the following form:

ẋ = f(x) + ω

f(x) = (f_ψ(ψ, s, a), f_s(ψ, s, a), f_a(s, a, μ), f_μ(s, a, μ))    (1)
This describes the dynamics of a system with a Markov blanket
in terms of the flow f ( x ) of its states and random fluctuations ω.
The flow of external ψ ∈ Ψ, sensory s ∈ S, active a ∈ A and internal
μ ∈ M states in the second equality conforms to the dependencies
implied by a Markov blanket ( see Table 1 ). External states can
only be influenced by internal states through their Markov blan-
ket, and are therefore called hidden states, because they are hid-
den behind the Markov blanket. In the specific setting of biological
systems, the partition in Eq. (1) relies on the spatial location of
states, and identifies sensory and active states as components of
spatial boundaries.
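To make the dependency structure of Eq. (1) concrete, the following minimal sketch (our Python illustration, not the paper's SPM code; the linear flows, decay rates and noise amplitude are all assumptions) integrates a Langevin system whose four flow functions take only the arguments a Markov blanket permits:

```python
import numpy as np

# Euler-Maruyama integration of Eq. (1). Each flow function receives only
# the states its Markov blanket dependencies allow: external (psi) and
# sensory (s) flows never see internal states (mu); active (a) and internal
# flows never see external states. The linear flows below are illustrative.
rng = np.random.default_rng(0)

def f_psi(psi, s, a):  # external flow: driven by active states
    return -2.0 * psi + a

def f_s(psi, s, a):    # sensory flow: driven by external states
    return -2.0 * s + psi

def f_a(s, a, mu):     # active flow: driven by internal states
    return -2.0 * a + mu

def f_mu(s, a, mu):    # internal flow: driven by sensory states
    return -2.0 * mu + s

def step(x, dt=0.01, noise=0.01):
    psi, s, a, mu = x
    drift = np.array([f_psi(psi, s, a), f_s(psi, s, a),
                      f_a(s, a, mu), f_mu(s, a, mu)])
    return x + dt * drift + np.sqrt(dt) * noise * rng.standard_normal(4)

x = np.array([1.0, 0.0, 0.0, -1.0])   # (psi, s, a, mu)
for _ in range(5000):
    x = step(x)
```

With these (stable) flows, the states relax to a random attractor concentrated near the origin, while internal and external states only ever communicate through the blanket.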
An alternative formulation of Eq. (1) is in terms of a Lagrangian,
which allows us to describe the system’s dynamics in terms of a
gradient flow using the Helmholtz decomposition. This rests upon
ergodic assumptions implying the existence of an attracting set
(conditioned on the model) in the state space, called a pullback or
random global attractor ( Crauel et al., 1999 ; Crauel and Flandoli,
1994 ), and an associated probability density, the ergodic density.
Following Crauel et al. (1999 ), Crauel and Flandoli (1994 ) and
Friston (2013 ), one can express the flow of states in terms of a
divergence-free component and a curl-free descent on a Lagrangian
L ( x ) that corresponds to the self-information or surprise associated
with any state.
f(x) = (Q − Γ)·∇L(x)

L(x) = −ln p(x | m)    (2)
Here, the diffusion tensor Γ is half the covariance of the ran-
dom fluctuations, and Q is an antisymmetric matrix that satis-
fies Q(x) = −Q(x)^T. The equality p( x | m ) = exp ( −L ( x ) ) is the solu-
tion of the Fokker-Planck equation describing the density dynamics
( Friston and Ao, 2012 ; Frank, 2004 ), where m denotes a particular
system or model (see the Variational free energy section). This ergodic
or nonequilibrium steady-state density is the probability density
whose rate of change is zero. Eq. (2) means the states of a sys-
tem m at nonequilibrium steady state are performing a gradient
ascent on the ergodic density. This is revealing, because it shows
that the system’s flow counters the dispersive effects of random
fluctuations – by flowing towards the attracting states.
f(x) = (Γ − Q)·∇ln p(x | m)    (3)
This gradient flow formulation also applies to the flow of inter-
nal and active states
f_a(s, a, μ) = (Γ − Q)·∇_a ln p(s, a, μ | m)

f_μ(s, a, μ) = (Γ − Q)·∇_μ ln p(s, a, μ | m)    (4)
These equations are the homologues of (2) for the internal and
active states, whose flow performs a gradient ascent on the er-
godic density over the internal states and their Markov blanket
(note that this density does not involve the external states, in
virtue of the dependencies in Eq. (1) ). In short, the internal and
blanket states that constitute a subsystem are autopoietic, because
their (nonequilibrium steady-state or ergodic) probability density
is maintained by the flow of the subsystem’s internal and ac-
tive states. In the context of spatially dependent interactions, the
partition of flows expressed in Eq. (4) , afforded by the Markov blan-
ket formalism, relies on the spatial separation of states by a spatial
boundary.
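For a toy Gaussian ergodic density, the decomposition in Eqs. (2)–(3) can be checked directly; the mean, covariance, Γ and Q below are arbitrary choices of ours, not quantities from the paper:

```python
import numpy as np

# Gradient-flow form f(x) = (Gamma - Q) grad ln p(x|m) for a Gaussian
# ergodic density: Gamma (symmetric, positive) is the dissipative part and
# Q (antisymmetric) the divergence-free, solenoidal part.
mean = np.array([1.0, -1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
P = np.linalg.inv(Sigma)                  # precision matrix
Gamma = 0.5 * np.eye(2)                   # diffusion tensor
Q = np.array([[0.0, 0.8],
              [-0.8, 0.0]])               # satisfies Q = -Q.T

def grad_ln_p(x):
    return -P @ (x - mean)

def flow(x):
    return (Gamma - Q) @ grad_ln_p(x)

f_mode = flow(mean)                       # the flow vanishes at the mode
x = np.array([3.0, 2.0])
ascent_rate = grad_ln_p(x) @ flow(x)      # only Gamma contributes: > 0
```

Because Q is antisymmetric it contributes nothing to the rate of ascent on ln p: the solenoidal part circulates along iso-density contours, while Γ drives the state up the density gradient, countering dispersion.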
5. The variational free energy formulation
The flow of the states therefore describes a gradient ascent on
the ergodic density. Analogously, in the setting of stochastic
thermodynamics, the system will minimise its thermodynamic free
energy ( Seifert, 2012 ). The link between thermodynamic and vari-
ational free energy rests upon associating the amplitude of ran-
dom fluctuations on the motion of states with temperature –and
equipping them with particular units through the use of Boltz-
mann’s constant ( Friston, 2019 ; Seifert, 2012 ; Sekimoto, 1998 ). This
means that the changes in variational free energy inherent in
belief updating can be linked directly to changes in thermody-
namic free energy in a way that is consistent with the Jarzynski
equality ( Jarzynski, 1997 ) and Landauer’s principle ( Bennett, 2003 ;
Landauer, 1961 ). Please see Friston (2019 ) for a fuller discussion
and England (2013 ) and Parrondo et al. (2015 ) for a related per-
spective. Although the ergodic density exists, it is not evaluated
explicitly by the system, because this would require access to ex-
ternal states that are hidden behind the Markov blanket. However,
it is possible to use an alternative formulation that furnishes a de-
scription of the flow in terms of a gradient descent on a variational
free energy associated with a generative model of the system in
question ( Friston et al., 2015 ):
f_μ(s, a, μ) = (Q_μ − Γ_μ)·∇_μF

f_a(s, a, μ) = (Q_a − Γ_a)·∇_aF

F(s, a, μ) = E_q[L(x)] − H[q(ψ | μ)]    (5)
Here, the flow of internal and active states has been expressed
as a gradient descent on variational free energy, which is a func-
tion of states that are available to the system. This follows because
free energy depends on a variational density q ( ψ| μ) over external
states that is parameterised by internal states, and a generative
model p ( ψ, s, a, μ| m ), which is the system itself, where m denotes
the particular system ( Friston et al., 2015 ).
Under this formulation of density dynamics, internal states will
appear to infer external states: the third equality expresses free
energy as the self-information (i.e., negative log evidence for the
model) expected under the variational density minus the entropy
of the variational density. This means that internal and active
states maximise the joint probability density –expected under
the variational density –over states conditioned on the system or
model in question. Moreover, internal states will reduce free en-
ergy by parameterising a variational density over external states
with maximum entropy; in accordance with Jaynes’ principle of
maximum entropy ( MacKay, 2003 ). Although not our focus here,
when variational free energy is minimised, the variational density
becomes the posterior density over hidden or external states, given
blanket states. In this sense, the internal states encode posterior
‘beliefs’ about external states; despite never seeing them directly.
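The bound just described can be verified in the simplest possible case. The sketch below (our one-dimensional linear-Gaussian example, not from the paper) shows that the free energy of Eq. (5) upper-bounds the surprise −ln p(s) and touches it exactly when the variational density is the posterior:

```python
import numpy as np

# Model: p(psi) = N(0, 1), p(s|psi) = N(psi, 1); variational density
# q(psi|mu) = N(mu, v). Then F = E_q[-ln p(s, psi)] - H[q] >= -ln p(s).
s = 0.7                                   # observed sensory state

def surprise(s):                          # -ln p(s); marginally s ~ N(0, 2)
    return 0.5 * (np.log(2 * np.pi * 2.0) + s ** 2 / 2.0)

def free_energy(mu, v, s):
    # E_q[-ln p(s|psi)] + E_q[-ln p(psi)], in closed form for Gaussians
    e_joint = 0.5 * (np.log(2 * np.pi) + (s - mu) ** 2 + v) \
            + 0.5 * (np.log(2 * np.pi) + mu ** 2 + v)
    entropy = 0.5 * np.log(2 * np.pi * np.e * v)
    return e_joint - entropy

F_opt = free_energy(s / 2.0, 0.5, s)      # q equals the posterior N(s/2, 1/2)
F_off = free_energy(0.0, 1.0, s)          # any other q loosens the bound
```

Note the role of the entropy term: among densities with the same expected energy, the one with maximum entropy attains the lowest free energy, in accordance with the maximum entropy principle mentioned above.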
Crucially, the free energy formulation allows us to prescribe the
ergodic density in terms of a generative model. In other words, we
can write down a generative model and derive the dynamics ac-
cording to Eq. (5) as a gradient descent on the free energy equiva-
lent of surprise. In what follows, we will simulate self-organisation
by specifying a model about the causes of sensory states –and
by specifying the environmental dynamics generating those sen-
sations. This means we need to write down the generative model
p ( ψ, s, a, μ| m ) of the system in terms of the dynamics f_ψ( ψ, s, a )
and f_s( ψ, s, a ) of the environment and how sensory states are gen-
erated. Interestingly, the generative process and model do not have
to be isomorphic: the generative model has only to approximate
the generative process to minimise free energy ( Baltieri and Buck-
ley, 2018 ). The generative model is usually expressed in terms of
random differential equations and nonlinear functions with a hier-
archical form. In this paper, we will omit these dynamics for sim-
plicity, and specify the relationship between external and sensory
states through the following (static) nonlinear functions:
s = g^(1)(ψ^(1)) + ω^(1)

ψ^(1) = g^(2)(ψ^(2)) + ω^(2)

⋮    (6)
Under Gaussian assumptions about the random fluctuations ω,
Eq. (6) prescribes the likelihood and priors defining the generative
model or Lagrangian:
p(ψ, s | m) = p(s | ψ^(1)) p(ψ^(1) | ψ^(2)) ···

p(s | ψ^(1)) = N(g^(1)(ψ^(1)), Π^(1))

p(ψ^(1) | ψ^(2)) = N(g^(2)(ψ^(2)), Π^(2))

⋮    (7)
Here, Π^(i) corresponds to the precision or inverse variance of
the random fluctuations. This allows us to completely specify the
generative model in terms of beliefs about how sensations are gen-
erated and priors about hidden states. The key question we address
in the next section is: what are the right priors that enable the
emergence of Markov blankets at a higher, macroscopic level – and
that would enable us to interpret the ensuing macroscopic dynamics
in terms of the self-evidencing above?
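As a toy instance of Eqs. (6)–(7), the sketch below builds a two-level Gaussian model (the particular nonlinearities g^(1), g^(2) and precisions are our assumptions, chosen only for illustration), samples from it, and evaluates its log joint density:

```python
import numpy as np

# Hierarchical static model: s = g1(psi1) + omega1, psi1 = g2(psi2) + omega2,
# with Gaussian fluctuations of precision Pi_1 and Pi_2 (cf. Eq. (7)).
rng = np.random.default_rng(3)
Pi_1, Pi_2 = 4.0, 1.0                     # precisions (inverse variances)
g1 = np.tanh                              # illustrative first-level mapping
g2 = lambda x: 0.5 * x                    # illustrative second-level mapping

def log_density(x, mean, precision):      # ln N(x; mean, 1/precision)
    return 0.5 * (np.log(precision / (2 * np.pi))
                  - precision * (x - mean) ** 2)

def log_joint(s, psi1, psi2):             # ln p(s|psi1) + ln p(psi1|psi2)
    return log_density(s, g1(psi1), Pi_1) + log_density(psi1, g2(psi2), Pi_2)

# Ancestral sampling down the hierarchy:
psi2 = rng.standard_normal()
psi1 = g2(psi2) + rng.standard_normal() / np.sqrt(Pi_2)
s = g1(psi1) + rng.standard_normal() / np.sqrt(Pi_1)
lj = log_joint(s, psi1, psi2)
```

The log joint is largest when each level's state sits at the mean predicted by the level above, which is what the gradient flows of Eq. (5) exploit.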
In the simulations of subsequent sections, we integrate
Eq. (5) using the Matlab routine spm_ADEM.m in the SPM
open source academic software. This generalised Bayesian filtering
scheme uses the Laplace assumption; i.e., the assumption that the
variational density has a Gaussian form. The use of a Bayesian fil-
tering scheme follows because the variational density q ( ψ| μ) over
external states approximates the posterior density p ( ψ| s, a, μ):
please see Friston (2014; 2010) for details. In summary, one can
use standard Bayesian filtering to simulate self-organisation. This
allows one to specify the form of the ergodic or nonequilibrium
steady-state density in terms of the priors of a generative model.
The question now is: what sort of priors leads to hierarchical self-
organisation?
6. Self-organisation of an ensemble
In what follows, we present two sets of simulations. The first
considers the self-organisation of an ensemble of synthetic cells,
where each cell possesses its own Markov blanket. The second
simulation considers ensembles of ensembles to illustrate hierar-
chical self-organisation; namely, the self-assembly of Markov blan-
kets of Markov blankets. Crucially, these simulations use simple
generative models, embodying the prior ‘belief’ that each member
can play the role of an internal, active or sensory state within the
ensemble. In other words, Markov blankets at one level of organ-
isation possess prior beliefs there is a Markov blanket partition at
the level above. However, each cell has no prior belief about its
particular role in the higher level Markov blanket –or the form
and composition of this blanket. These elementary priors are easy
to specify because each role just depends upon the influences each
member of the ensemble can or cannot exert on the others. This
means, the only hidden state each member needs to infer is which
role it plays at the higher level. We will see that this minimal set
of prior beliefs (and subsequent self-evidencing) results in the for-
mation of Markov blankets within the ensemble. The ensuing self-
similar organisation can, in principle, be extended to any number
of hierarchical levels. We illustrate this kind of hierarchical self-
organisation using 16 cells, each with their own Markov blanket, that
organise into a cellular group or assembly, with its own Markov
blanket. We then consider an ensemble of ensembles that organ-
ises itself into a little organ encompassed in another Markov blan-
ket.
The first simulation illustrates the self-organisation of an en-
semble. Each cell interacts with other cells; in a process that even-
tually leads to a stable configuration with a boundary separating
internal cells from their external milieu. This simulation draws on
previous work on morphogenesis ( Friston et al., 2015 ).
In this setting, self-organisation was simulated by minimising the
variational free energy of each cell until they attained a prescribed
morphology. This morphology was achieved through spatially de-
pendent (e.g. chemical) signalling –so that every cell sensed every
other cell in a way that was consistent with their generative mod-
els. The morphology was inscribed in beliefs common to all cells,
about cell identity, sensation and secretion. Each cell was inter-
preted as a Markov blanket surrounding internal states: the action
(active states) of a cell was the cause (i.e., external states) of the
sensations (i.e., sensory states) of the remaining cells. At the be-
ginning of pattern-formation, cells were undifferentiated, because
they were uncertain about their identity in the target morphology.
As self-organisation unfolded, each cell inferred a unique identity,
location and what they should sense at that location. When every
cell was in the right place, these inferences were fulfilled; thereby
minimising the free energy (i.e., self information or surprise) of ev-
ery cell.
In more detail, this inference –in analogy to intracellular cas-
cade signalling and epigenetic mechanisms –was driven by the
minimisation of free energy. By generating identity-dependent pre-
dictions (e.g. genetic and epigenetic expression) about sensations,
every cell moved around and produced extracellular signals un-
til its predictions were confirmed. Predictions about sensations
caused by other cells (e.g. extracellular signalling) and its own ac-
tion (e.g. secretion and position) were constrained by prior beliefs
about the role of each cell in the target morphology. These prior
beliefs were the same for every cell (c.f., pluripotential or stem
cells). In other words, based on its identity, each cell had partic-
ular expectations about its sensory states. Because sensations were
caused by other cells, surprise could only be minimised when ev-
ery member of the ensemble had inferred a unique role within the
ensemble. In short, priors established a point attractor for the en-
semble dynamics, in terms of a free energy minimum, leading to
differentiation and self-organisation to a target morphology.
In the present work, we use the same strategy: we simulate
self-organisation of an ensemble of cells, coupled through spatially
decaying (e.g. chemical) signals. However, here, there is no target
morphology –only the prior that every cell will play the role of
an internal, sensory, or active cell, depending upon what it senses.
In other words, the priors embody the conditional independencies
implied by the existence of a Markov blanket; in the form of in-
tracellular and extracellular signalling between three cell types. As
the external states of each cell are the active states of other cells,
the system organises in a pattern that enables each cell to predict
signals from its companions as precisely as possible.
From the perspective of the ensemble there are no external
states. This is an important point, as self-organisation is by def-
inition autodidactic: it does not require coupling with an exter-
nal environment. The ensuing process leads to a spatial pattern,
wherein components of the system are organised in a predictable
fashion with respect to each other. Such a pattern is inscribed in
the (e.g., genetically encoded) prior expectations about the sorts
of signalling a cell should expect to participate in. More precisely,
priors are over parameters that specify the form of the generative
model, which shapes the free energy landscape, thus defining the
attracting states towards which the dynamics of the ensemble con-
verge ( Friston et al., 2015 ).
We now describe our simulation setup. The system comprised
sixteen pluripotential cells, which can become one of three types
of cells at the next hierarchical level; namely, internal, active and
sensory cells. Each cell type secretes a unique extracellular signal
and communicates according to the conditional independencies re-
quired by a Markov blanket (see Table 2 ). The external states of
each cell comprised its location ψ_x ∈ R² and the chemical signals
ψ_y ∈ R³ released. This can be expressed as:

ψ = (ψ_x, ψ_y) = (a_x, a_y)    (8)
Active states a_x and a_y (e.g., endoskeleton and secretory appa-
ratus, respectively) have an immediate effect on external states;
hence the identity mapping. This simplifying assumption means
we are ignoring time lags (and attenuation), and implies that there
is no environment, other than elements of the ensemble contribut-
ing to the external states. A cell’s sensory states are the sensed intra-
cellular (produced by itself) and extracellular (produced by other
cells) signals. The latter is a function of distance, assuming signal
concentration decreases exponentially over space. This can be ex-
pressed as:
s = (s_y, s_α) = (ψ_y, α(ψ_x, ψ_y)) + ω    (9)
Here, the sensory noise ω had a high precision (inverse vari-
ance) of exp(16). The sensed extracellular signals are returned by
the function α( ψ_x, ψ_y ), which models the spatial decay of signals,
where the extracellular sensations of the i th cell are given by
s_α^i = α^i(ψ^i, ψ^j) = Σ_j exp(−|ψ_x^i − ψ_x^j|)·ψ_y^j    (10)
Here, j indexes all cells other than the i th cell. Each cell gener-
ates predictions based on the same generative model, which spec-
ifies the mapping from hidden states –namely, the type of the cell
ψ^i –to sensations. The type is then the only hidden state that
the cells must infer. This inference is parameterised as an expected
probability by their internal states μ^i. Based on beliefs about its
type, each cell then generates predictions about intracellular and
extracellular sensations:
g(μ^i) = [p_y; p_α]·σ(μ^i)

σ(μ^i) = exp(μ^i) / Σ_i exp(μ^i)    (11)
Here, p_α and p_y are prior beliefs about secretion and sensation
given the type of cell (see Table 2 ), generating sensory predictions
according to the generative model g( μ^i ) (see Eq. (6) ), while σ( μ^i )
is a soft-max function that returns expectations about the cell’s
type. The resulting dynamics of the internal and active states of each
cell can be expressed as follows:
f_μ(s̃, ã, μ̃) = (Q_μ − Γ_μ)·∇_μF = D μ̃ − ∂_μ ε̃ · Π^(1) ε̃ − Π^(2) μ̃

f_a(s̃, ã, μ̃) = (Q_a − Γ_a)·∇_aF = −∂_a s̃ · Π^(1) ε̃

ȧ_x = −∂_x s̃_α · Π_α^(1) ε̃_α

ȧ_y = −Π_y^(1) ε̃_y

ε = (ε_y, ε_α) = (s_y − p_y·σ(μ), s_α − p_α·σ(μ))    (12)
Here, ε = s − g(μ) is called a prediction error, and Π^(2) is the
precision of a Gaussian prior over internal states that parameterise
posterior beliefs about external states. The ~ notation denotes gen-
eralised coordinates of motion: see Friston et al. (2010 ). The ap-
pearance of the precision-weighted prediction errors in this equa-
tion arises from the Laplace assumption alluded to earlier. Because
variational free energy is expressed in terms of log probabilities,
the Laplace assumption licenses a locally quadratic approximation
to the variational free energy. As such, a second order Taylor se-
ries expansion around the posterior mode is sufficient to charac-
terise this functional. Because the gradient of variational free en-
ergy evaluated at the posterior mode is zero (by definition), the
Table 2
Prior beliefs characterising dependencies and independencies.

p_y =
       μ  a  s
   μ [ 1  0  0 ]
   a [ 0  1  0 ]
   s [ 0  0  1 ]

Prior probability matrix p_y over sensed intracellular signals s_y. Each cell
secretes one of three types of signal.

p_α =
       μ  a  s
   μ [ 1  1  0 ]
   a [ 1  1  1 ]
   s [ 0  1  1 ]

Prior probability matrix p_α over sensed extracellular signals s_α. Internal
states ( μ) can interact with active states ( a ); active states can interact with
internal and sensory ( s ) states; sensory states can interact with active states;
and every cell exchanges signals with cells of the same type.
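Eqs. (10)–(11) and the priors of Table 2 can be sketched in a few lines (our Python paraphrase; the random positions and signals below are stand-ins for ψ_x and ψ_y, not the paper's simulation):

```python
import numpy as np

# Table 2 priors, rows/columns ordered (mu, a, s): p_y is the identity
# (each type secretes its own signal); p_alpha encodes which types a cell
# expects to exchange extracellular signals with.
p_y = np.eye(3)
p_alpha = np.array([[1., 1., 0.],
                    [1., 1., 1.],
                    [0., 1., 1.]])

def softmax(mu):
    e = np.exp(mu - mu.max())
    return e / e.sum()

def sense_extracellular(i, pos, sig):
    """Eq. (10): exponentially distance-weighted sum of other cells' signals."""
    s = np.zeros(3)
    for j in range(len(pos)):
        if j != i:
            s += np.exp(-np.linalg.norm(pos[i] - pos[j])) * sig[j]
    return s

def predict(mu):
    """Eq. (11): predictions from the expected cell type sigma(mu)."""
    sigma = softmax(mu)
    return p_y @ sigma, p_alpha @ sigma

rng = np.random.default_rng(1)
pos = rng.standard_normal((16, 2))      # psi_x: positions in R^2
sig = rng.random((16, 3))               # psi_y: released signals in R^3
s_alpha = sense_extracellular(0, pos, sig)
pred_y, pred_alpha = predict(np.array([4.0, 0.0, 0.0]))  # near-certain 'internal'
```

A cell that is confident it is internal predicts almost exclusively the internal intracellular signal, and expects essentially no signal from sensory cells, exactly the independency the blanket requires.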
Fig. 3. Self-organisation at the first level. This figure illustrates four snapshots at different times during the simulation of the (final stage of) self-organisation of an ensemble
comprising sixteen ’cells’, whose internal and active equations of motion describe a gradient descent on prediction error, relative to sensory states expected by each member
of the ensemble. Every member is endowed with the same prior (genetic) beliefs about what they should signal and sense, depending upon their type (which has to be
inferred on the basis of what they sense). These priors ultimately prescribe a point attractor for the dynamics of the ensemble.
Each cell can then infer (via intracellular
dynamics) its type and behave (via extracellular signalling) accordingly, while moving (via chemotaxis) to a location that fulfils its predictions about its extracellular signals.
The emergent morphology of the ensemble is a cell of cells, with an internal (red) cell in the centre, surrounded by a membrane of active (green) cells in the middle, and
sensory (blue) cells on the periphery. This is the spatial pattern that best fulfils the prior beliefs of all the constituent cells. Note that the cells are initially pluripotent and
only acquire (i.e., infer) their (colour-coded) role in the
Markov blanket, in virtue of their position and signalling with other cells as they self-organise (see for an example
the internal state at time 40 and 70). This means the number of sensory, active and internal cells is not encoded in each cell’s prior; rather, it is an emergent property of
self-organisation under the simple prior that each cell must be a particular type of cell. Furthermore, if a cell infers that it is a particular type, then it becomes that type,
because its inference is mediated by intracellular signalling that classifies a cell as one type or another. (For interpretation of the references to color in this figure legend,
the reader is referred to the web version of this article.)
linear term of this expansion vanishes. The result is that free en-
ergy may be expressed as a function that is quadratic in the dis-
tance between each variable and its posterior mode (i.e., quadratic
in prediction errors). The gradient of the free energy is then simply
expressed as a precision-weighted prediction error. Eq. (12) shows
that internal and active states minimise (variational) free energy.
Thus, internal and active states perform a descent to minimise
prediction errors ( Friston, 2010 ). Under these equations of motion,
cells infer their identity based on sensations, while secreting ac-
cording to their role as the ensemble evolves. At the same time,
cells move to a position, where extracellular inputs can be best
predicted.
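A stripped-down version of this descent (our sketch: a numerical gradient, a single static cell, unit sensory precisions, and a weak prior precision standing in for Π^(2); no generalised coordinates or movement) shows a pluripotent cell inferring its type from the sensations an 'active' cell would receive:

```python
import numpy as np

p_y = np.eye(3)
p_alpha = np.array([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])
pi_2 = 0.1                                # weak prior precision on mu

def softmax(mu):
    e = np.exp(mu - mu.max())
    return e / e.sum()

def free_energy(mu, s_y, s_a):
    """Quadratic free energy: squared prediction errors plus a Gaussian
    prior term on the internal states (cf. Eq. (12))."""
    sig = softmax(mu)
    e_y = s_y - p_y @ sig
    e_a = s_a - p_alpha @ sig
    return 0.5 * (e_y @ e_y + e_a @ e_a) + 0.5 * pi_2 * (mu @ mu)

true_type = np.array([0., 1., 0.])        # sensations caused by an 'active' cell
s_y, s_a = p_y @ true_type, p_alpha @ true_type

mu = np.zeros(3)                          # pluripotent: no initial preference
F0 = free_energy(mu, s_y, s_a)
for _ in range(400):                      # Euler descent with a numerical gradient
    grad = np.zeros(3)
    for k in range(3):
        d = np.zeros(3)
        d[k] = 1e-6
        grad[k] = (free_energy(mu + d, s_y, s_a)
                   - free_energy(mu - d, s_y, s_a)) / 2e-6
    mu -= 0.2 * grad
F1 = free_energy(mu, s_y, s_a)
```

The descent lowers free energy and the cell's expectations converge on the type whose predicted sensations match what it senses, illustrating differentiation through inference.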
Eq. (12) also shows how the surprise of observations given a
particular model is reflected in the free energy. The term ‘surprise’
(a.k.a. surprisal) is used here in the information theoretic sense
that quantifies how improbable a given event is ( Tribus, 1961 ). Sur-
prisal is a negative log probability (where the probability in ques-
tion is the marginal likelihood of sensory states under the gener-
ative model). It is this quantity that is upper bounded by the free
energy. Under the quadratic approximation to surprise (and vari-
ational free energy) employed in this paper, surprise scales with
the squared difference between the expected and observed sensory
state. This lets us associate surprise with a squared prediction er-
ror. As shown in Eq. (12) , free energy is also a function of weighted
squared prediction errors. As such, a surprising sensory state, when
there is a mismatch between expected and observed data, leads to
a large, precise, prediction error ( ε) and an increase in free energy.
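Numerically, for a Gaussian marginal likelihood (a toy stand-in of ours, with the constant terms retained), surprisal scales with the precision-weighted squared prediction error:

```python
import numpy as np

# Gaussian surprisal: -ln N(s; prediction, 1/precision). It grows with the
# precision-weighted squared prediction error, as described above.
def surprisal(s, prediction, precision):
    return 0.5 * precision * (s - prediction) ** 2 \
         + 0.5 * np.log(2 * np.pi / precision)

low   = surprisal(1.0, 1.1, precision=1.0)   # small prediction error
high  = surprisal(1.0, 2.0, precision=1.0)   # large prediction error
sharp = surprisal(1.0, 2.0, precision=16.0)  # same error, higher precision
```

The same mismatch is more surprising when the cell's expectations are more precise, which is why precise, large prediction errors dominate the free energy.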
The results of an exemplar simulation are shown in Fig. 3 . Self-
organisation leads the ensemble to assume a cell-like morphology,
with internal cells in the middle, encircled by active cells, sur-
rounded in turn by sensory cells. Because there are no prior beliefs
either about the location or about the number of cells per type,
these results constitute an emergent property, resulting from the
spatial dependency of interactions among agents. In other words,
the number of cells of each type is not pre-specified or part of
the generative model –it is an emergent property. Furthermore,
the arrangement of differentiated cells is not prescribed by each
cell’s prior beliefs –the arrangement is an emergent property that
is consistent with intercellular signalling. This is interesting in the
sense that the only arrangements that are consistent with a cell’s
beliefs about participating in a Markov blanket are exactly the ar-
rangements that are consistent with a Markov blanket of cells: see
also Cademartiri et al. (2012 ).
Although the results reported in Fig. 3 are sensitive to the pri-
ors that constitute each cell’s generative model (please see discus-
sion), they do not depend sensitively upon the initial states of each
cell. In particular, random fluctuations in the initial positions and
states of the cell do not affect the self organisation illustrated in
Fig. 3 . Exactly the same arrangement can be reproduced quanti-
tatively with different initial randomisations. On the other hand,
small changes to the priors, such as the spatial decay of extracel-
lular signals –or the motility of cells –do affect the final config-
uration. For example, the number of internal cells can be greater
than one and the final positions of the cells vary with different
priors. Interested readers can repeat these simulations using dif-
ferent initial randomisations and prior settings, using open source
code (please see Additional Material).
The simulation presented above illustrates the role of Markov
blankets in a simple but plausible world where only local inter-
actions are permitted, in which prior beliefs (e.g., a genetic code)
have learned that, in order to exist, a living system has to self-
generate boundaries that separate it from –and mediate the cou-
pling with –its environment. As in real biological systems, the
constituents of an ensemble interact with each other, leading to
signal cascades. This signalling rests on inference (e.g., intracellular
dynamics) about the role each cell should play, where action (e.g.,
chemotactic signalling) realises that role. Cells then differentiate,
based upon their prior beliefs (e.g., genetic code). In essence, the
ensemble reaches a steady state characterised by an internal mi-
lieu, which exists – in virtue of assembling its own Markov blan-
ket – as an integral part of the system. One could imagine that genes
specify Markovian affordances to produce hierarchical structures;
such as organs, tissues, organisms and so on. On this view, self-
organisation is then a recursive process that engenders, at every
level, the emergence of Markov blankets. We now pursue this us-
ing an extended simulation set up.
7. Self-organisation: ensemble of ensembles
The second simulation considers ensembles of organelles, each
comprising an ensemble of 16 cells, to illustrate hierarchical self-
organisation; namely, the self-assembly of Markov blankets of
Markov blankets and the requisite coupling between levels. To in-
vestigate the autonomous organisation of (256) cells at two lev-
els, every cell is equipped with the same (genetic beliefs or) pri-
ors about their local and global identity, that is, they share be-
liefs about possible roles at both the ensemble (local) and ensem-
ble of ensemble (global) level. Practically, each cell now had two
sets of hidden states –and prior beliefs – pertaining to their role
at the local (i.e., microscopic) and global (i.e., macroscopic) level.
Crucially, these priors are the same as used in the previous simu-
lation; namely, they prescribe conditional independencies that are
mandated by a Markov blanket at each level:
(p_y, p_α) = (p_y^l, p_α^l) = (p_y^g, p_α^g)    (13)
Here, the superscripts denote the local (ensemble) and global
(ensemble of ensemble) level. The only additional piece of infor-
mation required in this simulation is how the two levels couple
to each other. For computational expediency, we model the micro-
scopic dynamics (cells within an ensemble) of only one ensemble
of sixteen cells, whereas for the remaining (fifteen) ensembles, we
assume that the average behaviour conforms to the local dynamics
of the simulated ensemble. This is a mean field approximation in
the sense that we discount local fluctuations within each ensem-
ble and assume only their average behaviour is ‘seen’ by any single
ensemble. This allows us to simulate the coupling of the sixteen cells
of the fully simulated ensemble with the other fifteen ensemble means
(without simulating the other 15 ensembles explicitly). Notice that
the allocation of cells to ensembles does not imply an allocation to
a particular type of ensemble. The ensembles self-allocate as the
Markov blanket emerges at the higher level. In summary, this sim-
ulation illustrates how sixteen cells self-organise in an ensemble
that in turn self-organises with other fifteen identical ensembles,
while describing the coupling between the local and global level.
In particular, for the fully simulated k-th ensemble, the global-to-local extracellular coupling means that it only senses the average of all other global signals, while the local-to-global coupling means that the average over its n active states informs the dynamics of the remaining ensembles:
$$s^{g}_{\alpha,\zeta} = s_{a,k}, \qquad s_{y,k} = a_{y,k} = \frac{1}{n}\sum_{\zeta} a^{l}_{y,\zeta} \tag{14}$$
where ζ = 1 : n. The first and second equalities in (14) refer to the extracellular sensing of cells and the intracellular sensation of the ensemble, respectively. In terms of local-to-global coupling, because the cells are part of the same ensemble, these predictions will be congruent with each other and cells will therefore act in concert at the global level:
$$g\big(\mu^{g}_{\zeta}\big) \;=\; \big(p^{g}_{y},\,p^{g}_{\alpha}\big)\cdot\sigma\big(\mu^{g}_{\zeta}\big) \tag{15}$$
Here, $\mu^{g}_{\zeta}$ is the expectation about global coupling for each cell in the ensemble.
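To make this coupling concrete, the following is a minimal NumPy sketch (our own illustrative construction, not the SPM implementation; the prior profiles `p_y`, `p_alpha`, the expectations `mu_g` and all numbers are hypothetical). The local-to-global mapping averages the cells’ active states, cf. (14); the global prediction for a cell weights the prior profiles by a softmax over its expectations about global type, in the spirit of (15):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_cells, n_channels = 16, 3          # 3 types: internal / active / sensory

# hypothetical prior profiles: expected signal per type (columns = types)
p_y = np.eye(n_channels)             # sensed-signal profiles
p_alpha = np.eye(n_channels)         # secreted (active) signal profiles

# local active states of the cells in the fully simulated (k-th) ensemble
a_local = rng.random((n_cells, n_channels))

# local-to-global coupling, cf. (14): the ensemble's active state is the
# average of its cells' active states
a_ensemble = a_local.mean(axis=0)

# global-to-local coupling: each cell senses only the mean signal of the
# other (here, fifteen) ensembles
other_ensembles = rng.random((15, n_channels))
s_global = other_ensembles.mean(axis=0)

# global prediction for one cell, in the spirit of (15): prior profiles
# weighted by a softmax over its expectations about global type
mu_g = np.array([0.1, 2.0, 0.3])     # this cell favours the 'active' type
prediction = p_y @ softmax(mu_g)

print(a_ensemble.shape, s_global.shape, prediction.argmax())
```

Because every cell in the ensemble is driven towards congruent expectations, the averaged active state comes to report a single, coherent global type.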
In summary, sixteen cells locally self-organise in an ensemble,
guided by the (local) priors, while interacting with the remaining
fifteen ensembles. Exemplar simulation results are shown in Fig. 4 ,
which illustrates hierarchical self-organisation and pattern forma-
tion of Markov blankets within Markov blankets. The lower panels
of Fig. 4 show the evolution of each cell’s expectations (i.e., differ-
entiation) at the local (left), and global (middle) level. The lower
right panel shows the expectations of the cells of the sixth ensem-
ble about their role at the global level. The sixth ensemble is an
active ensemble at the global level (colour-coded green), and
this global identity appears to constrain the expectations of every
cell at the local level.
A first interesting aspect of these simulation results is the ra-
pidity with which the cells infer their type and implicitly dif-
ferentiate into internal or blanket cells. This was a generic fea-
ture of the simulations, suggesting that the priors that lead to
hierarchical self-organisation require a fairly rapid specialisation to enable the self-assembly of a global attractor. This reflects the circular causality or positive feedback loop between microscopic and macroscopic (Markov blanket) dynamics that underlies this kind of self-organisation. In other words, cells move and secrete chemicals according to their inferred role in the Markov blanket partition; simultaneously, inference becomes more accurate as cells approach their final configuration. In the setting of hierarchical self-organisation, inference at the within-ensemble (local) level is characteristically faster than at the between-ensemble (global) level.
This reflects a ubiquitous separation of temporal scales that charac-
terises hierarchical self-organisation ( Jung et al., 2015 ; Kiebel et al.,
2008 ; Perdikis et al., 2011 ). A second noteworthy point is the spa-
tial structure emerging in these exemplar simulations: when in-
teractions are spatially dependent, active states – which can influ-
ence but are not influenced by external states –are enclosed by
sensory states, in a similar way as the cytoskeleton resides within
cellular boundaries, or muscles within the epithelium. Analogously, sensory states, which can influence but are not influenced by internal states, are segregated from the latter. Notice, finally, that just as cells must possess a Markov blanket to distinguish themselves from one another at one level of description, at the level above there will be other ensembles (not shown) from which the simulated one must differentiate itself, by means of a Markov blanket.
In the example shown in Fig. 4 , local expectations (lower left
panel) converge quickly, with a discernible differentiation after the
first time step. Conversely, local expectations about the global type
take much longer to converge, with uncertainty in some cells resolving only at the final time step (see cell 12 in the lower right panel). This may reflect the separation of timescales formalised in synergetics, in particular by the slaving principle, which deals with self-organisation and pattern formation in open systems far from thermodynamic equilibrium (Carr, 1981; Ginzburg, 1955; Haken, 1978; Tschacher and Haken, 2007). In this setting, slow macroscopic patterns of activity are said to enslave fast microscopic patterns, while the macroscopic patterns (known as order parameters) are constituted by the microscopic patterns; hence circular causality.
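The slaving principle can be caricatured with a toy fast–slow system (a hypothetical illustration of our own, not part of the original simulations; all constants are arbitrary): many fast microscopic variables relax rapidly towards a slow order parameter, while the order parameter drifts slowly towards the average of the fast variables, so that each constitutes the other:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 64, 0.01, 2000
tau_fast, tau_slow = 0.05, 5.0     # two well-separated timescales

x = np.zeros(n)                    # fast microscopic variables
m = 3.0                            # slow macroscopic order parameter

for _ in range(steps):
    # fast dynamics: enslaved, relaxing rapidly towards the order parameter
    x += dt / tau_fast * (m - x) + np.sqrt(dt) * 0.05 * rng.standard_normal(n)
    # slow dynamics: the order parameter is constituted by the fast average
    m += dt / tau_slow * (x.mean() - m)

# circular causality: the fast variables cluster tightly around m, while m
# barely moves from where the fast variables collectively place it
print(round(m, 2), round(abs(x.mean() - m), 4))
```

The separation of timescales (here a factor of 100) is what licenses treating the order parameter as a quasi-static context for the microscopic dynamics.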
8. Discussion
In this paper, we have presented a variational treatment of hier-
archical self-organisation. Given local interactions, carefully crafted
prior (genetic) beliefs about conditional dependencies and inde-
pendencies endow a system with a point attractor comprising in-
ternal states and their Markov blanket. Moreover, applying the
same priors at any hierarchical level leads to the emergence of
Markov blankets within superordinate Markov blankets. A key feature of the simulations used in this paper is the absence of any explicit target morphology within the prior (e.g., genetic) beliefs of the system’s denizens. The morphology is an emergent property, which appears to have a top-down effect on the blankets below. The subsequent emergence of a cell-like structure is interesting because it
speaks to characteristic spatial boundaries found in most biological
systems; namely, cellular membranes. The isomorphism between
a statistical and spatial boundary rests on spatially dependent in-
teractions among internal and external states. In other words, the
Fig. 4. This figure shows the (final) results of self-organisation of an ensemble of ensembles, where each constituent of the ensemble is itself a local ensemble. In this example, 16 local ensembles, each composed of 16 cells, self-organise into a global ensemble. Panel A shows the evolution of the hierarchical system captured at three different moments. The colour of the central circles reflects the inferred type of the cells within each local ensemble; the peripheral circles indicate the specialisation of local ensembles within the global ensemble (colours: internal – red; active – green; sensory – blue). Note that there are no external states, because the external states comprise the Markov blankets of other ensembles. The key thing to observe here is that the (slow) self-organisation of local ensembles into a global Markov blanket, starting at time 1, relies on their very existence; that is, on the (fast) self-organisation of the cells composing each ensemble. At any temporal and spatial scale, the emergence of a Markov blanket reflects a particular independency structure, where internal cells do not influence sensory (i.e., surface) cells, in virtue of their separation by active cells. This separation induces conditional independence, because of the limited range of intracellular signals (which fall off as a Gaussian function of distance). Panel B shows the same results in an alternative format; namely, the evolution of expectations about type (i.e., differentiation) of cells within an exemplar local ensemble (left; local expectation), and of the local ensemble within the global ensemble (middle; global expectation). Notably, the identity of the local ensemble is the result (i.e., the average) of its constituent cells’ beliefs about their role at the higher level. This means that a local ensemble organises in concert with the others in a global ensemble because its cellular components have communal beliefs about their role (as a local ensemble) at the global level. These cellular ‘beliefs’ about global identity are represented in the left illustration of Panel B. Here, the exemplar ensemble becomes an active state; all its constituents come to infer that they participate as an active state at the global level (left; local about global). Note the differentiation at both the local and global levels, while local expectations about the cells’ role at the global level converge to the same type. Panel C displays the decrease in free energy of the hierarchical system as self-organisation takes place. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
states of a system can include its generalised motion in physical
space, such that the blanket states acquire the attribute of a spatial
location (and velocity). In turn, this means that spatial boundaries
can be identified with statistical boundaries, under conditional de-
pendencies of the flow internal and active states on external states.
As noted above, these normally involve short range statistical cou-
plings or, in physical terms, forces or chemical gradients ( Ao, 2009 ;
Seifert, 2012 ). Crucially, this sort of hierarchical self-organisation is
a recursive process that can repeat itself at higher levels of de-
scription. Interestingly, this perspective enables to identify ensem-
bles that are or are not self-organising: in this optic, the distinction
between a culture of cells and a multicellular organism resides in
the emergence of a Markov blanket at the ensemble level. Another
consequence of this recursive aspect is the absence of a privileged
point of view, when describing hierarchical self-organisation: the
dynamics at every level play the role of macroscopic states at the
level below, and the role of microscopic states at the level above.
A clear example is morphogenesis, during which chemotaxis and the development of a system’s components occur within a global configuration established by morphogen gradients controlling gene expression (Balaskas et al., 2012; Chang et al., 2002). Notably, in
our simulations, morphogen gradients could be interpreted as the
expression of beliefs about the possible role at the ensemble level
that each cell or element at the level below must have. Actuation
of these beliefs (morphogenesis) then occurs through an inference
process (differentiation), and realisation of the corresponding po-
sitional predictions (chemotaxis); for an example that uses exactly
the notion of a Markov blanket and chemotactic signalling, please
see ( Kuchling et al., 2019 ). At a subcellular scale, the same logic
applies, so that self-organisation of microscopic organelles like the
cellular membrane and vesicular structures is constrained by be-
liefs about their role at the cellular level. On the same note, the
dialectic between hierarchical layers formalised above accounts for
the dynamics of multi-agent, complex systems, ranging from cul-
tural ensembles ( Ramstead et al., 2018 ) to complex urban environ-
ments ( Hadfi and Ito, 2016 ).
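The short-range couplings invoked above can be made concrete with a toy coupling kernel (our own construction; the positions and the spatial decay constant σ are hypothetical): if influence falls off as a Gaussian function of distance, an internal cell that is separated from a sensory cell by an intervening active cell exerts a direct influence that is negligible by comparison, which is what renders the internal and sensory states conditionally independent:

```python
import numpy as np

# three cells on a line: internal at 0, active at 1, sensory at 2
positions = np.array([0.0, 1.0, 2.0])
sigma = 0.5   # hypothetical spatial decay constant of the signal

# coupling strength between every pair of cells: Gaussian falloff
d = np.abs(positions[:, None] - positions[None, :])
K = np.exp(-d**2 / (2 * sigma**2))

# internal -> sensory coupling is tiny relative to internal -> active, so
# the blanket (active) cell mediates effectively all of the influence
ratio = K[0, 2] / K[0, 1]
print(f"internal->sensory / internal->active = {ratio:.4f}")
```

In this sense, the statistical boundary (conditional independence) and the spatial boundary (the membrane of active cells) are the same thing.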
Here, we associate biochemical structures, gradient flows and kinetics with Bayesian priors and belief propagation or updating, as opposed to propositional or representational beliefs. The only
thing that licenses the use of the word ‘belief’ is that the macro-
molecular and cellular kinetics at hand were cast as a gradient
flow on variational free energy. This means that one can inter-
pret the resulting dynamics in terms of Bayesian belief updating
( Winn and Bishop, 2005 ). However, this is an ‘as if’ interpretation;
in the same sense that the folding of a macromolecule in computational chemistry – e.g., Lammert et al. (2012) – looks ‘as if’ it is trying to minimise its thermodynamic free energy.
The advantage of being able to formulate this kind of self-
organisation in terms of gradient flows on variational (as opposed
to thermodynamic) free energy is that variational free energy is
a functional of a generative model. This means that one can pre-
scribe the desired endpoint of self-organisation in terms of pri-
ors ( Friston et al., 2014 ). One might ask; where do priors come
from? Here, we assume that they are a product of natural selection
(i.e., Bayesian model selection) and are therefore entailed in genet-
ics and epigenetics ( Campbell, 2016 ; Frank, 2012 ; Ramstead et al.,
2018 ). We do not try to provide a detailed account of the ensuing
gradient flows in terms of molecular biology; e.g., Tabata and Takei
(2004). However, one can imagine how hypotheses based on variational free energy gradient flows could be tied to intra- and inter-cellular signalling; e.g., Cervera et al. (2019), Friston et al. (2015) and Kuchling et al. (2019). Interestingly, exactly the same challenge
arises in the neurosciences, where the equivalent gradient flows
become neuronal dynamics –and the accompanying challenges be-
come understanding neurophysiology and neuronal microcircuitry
in terms of variational message passing; e.g., Friston et al. (2017 ).
It is interesting to ask how the priors that underwrite this kind
of self-organisation are updated in terms of cell biology. The usual
response to this is to consider hierarchical processes of free energy
minimisation in terms of Bayesian model selection ( Allen, 2018 ;
Campbell, 2016 ; Frank, 2004 ; Ramstead et al., 2018 ). This leads
naturally to a link between natural selection and Bayesian model
selection based upon the evidence bounds afforded by variational
free energy. On this reading, natural selection becomes a form of
structure learning (a.k.a. Bayesian model selection) based upon the
model evidence associated with a particular phenotype. In short,
the (marginal) likelihood of a particular phenotypic structure – in an evolutionary setting – is optimised in terms of its prevalence
in a population. Because this structure is a model of the exter-
nal milieu, it entails particular priors. If these priors are fit for
purpose in terms of minimising variational free energy then they
will be selected. For example, under some mild assumptions, the
replicator equation can be cast as a Bayesian filter, exactly along the lines of the above argument: see Frank (2012), Geisler and Diehl (2003) and Ramírez and Marshall (2017) for further discussion.
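This correspondence is easy to exhibit numerically (a standard identity, sketched here with made-up frequencies and fitnesses): a discrete replicator update, which reweights each phenotype’s frequency by its relative fitness, coincides exactly with Bayes’ rule when frequencies play the role of priors and fitnesses the role of likelihoods:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])   # phenotype frequencies (the 'prior')
w = np.array([1.2, 1.0, 0.8])   # fitnesses (the 'likelihood')

# discrete replicator equation: p_i' = p_i * w_i / (mean fitness)
replicator = p * w / (p @ w)

# Bayes' rule: posterior_i proportional to prior_i * likelihood_i
bayes = p * w / np.sum(p * w)

print(np.allclose(replicator, bayes))   # the two updates are identical
```

The mean fitness that normalises the replicator update is exactly the marginal likelihood (model evidence) that normalises the Bayesian update.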
In our simulations, and more generally, we have made some
mild assumptions about the external or environmental states that
contextualise self-organisation at the highest scale considered. This
speaks to an important conceptual point; namely, that a partition-
ing of systemic states into Markov blankets at any scale is always
contextualised by the scale above. In other words, there must be a permissive context in which self-organisation unfolds, generally at faster timescales, at the level below (Schwabl, 2002; Jeffery et al., 2019). Strictly speaking, this implies an infinite regress, in the
sense that we can only talk about Markov blankets at one scale of
self-organisation by assuming some attracting set at a higher scale.
Indeed, this recursion can be formalised in terms of the renormalisation group that emerges from grouping and coarse-graining (i.e., reduction) operators on the Markov blankets (Friston, 2019). The renormalisation group formulation implies that the time constants of self-organisation necessarily increase when moving from one scale to the next.
Practically, this means that one can assume that the external
states that encompass the formation of Markov blankets at the
highest scale under consideration are changing slowly in relation
to lower scales. The picture that emerges here is that the same
basic (Bayesian or variational) mechanics emerge in a scale-free
fashion at different levels; from the quantum through to the level
of molecular biology; from the scale of cells through to organs,
from phenotypes through to species; all the way up to a cosmo-
logical scale. Although this might sound fanciful, this perspective
has some currency in relation to the differences between quantum,
statistical and classical mechanics. These differences rest largely upon the suppression of random fluctuations as one progresses from the small to the large. We have chosen to illustrate coupling between just two levels; namely, the mesoscopic levels of cells and cell assemblies in biology.
One might ask why we have focused on the free energy prin-
ciple, as opposed to other formal descriptions of self-organisation (Haken, 1978; Kauffman, 1993; Kelso, 1995; Nicolis and Prigogine, 1977); for example, phase transitions in spin models
( Vatansever and Fytas, 2018 ), attractor landscapes in random
Boolean networks ( Gershenson, 2012 ) or Turing style pattern for-
mation via reaction diffusion systems ( Halatek et al., 2018 ). Our
motivation for casting self-organisation as a variational principle
was threefold: first, the free energy principle provides an integra-
tive formalism that should apply to all the above. In other words,
it regards pattern formation and self-organisation – as manifest in reaction diffusion systems and other nonequilibrium steady-state dynamics – as realisations of the same principle. This is self-organisation to a random attractor (Crauel and Flandoli, 1994; Friston and Ao, 2012). When this attracting set possesses a Markov blanket, the free energy principle must apply (Friston, 2013). This means that one can interpret any form of self-organisation – to an attracting set – in terms of a gradient flow on variational free energy and, implicitly, self-evidencing (Hohwy, 2016; Ramstead et al., 2017). This is important because most existing approaches to the dynamics of self-organisation try to reverse-engineer an energy functional (or Lyapunov function) from given dynamics or equations of motion. The free energy principle allows
one to invert the problem and write down the dynamics as a gra-
dient flow on a free energy functional that is specified in terms of
a generative model ( Friston et al., 2014 ). Crucially, the prior beliefs
of this generative model determine the attracting set. By explicitly writing a Markov blanket into these priors, we obtain a system whose organisation is as general and essential as possible; one that is implicit in any self-organising system behaving according to the free energy principle.
ical systems ( Arnold and Crauel, 1991 ; Crauel and Flandoli, 1994 ),
the free energy principle allows one to articulate questions about
(and simulate) self-organisation at multiple scales. Here, we have
focused on the link between just two scales; however, by induc-
tion, the conclusions from this paper could be generalised to mul-
tiple levels –at least in principle. This hierarchical aspect would
be challenging to simulate using conventional approaches to pat-
tern formation.
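A minimal instance of this inversion can be sketched as follows (a toy Gaussian model of our own devising, not the generalised filtering scheme used in the simulations): write down a generative model with a Gaussian prior over a hidden state and a Gaussian likelihood for a sensed signal; the free energy is then a sum of precision-weighted squared prediction errors, and integrating its gradient flow performs the belief updating:

```python
def free_energy(mu, y, prior_mu=0.0, pi_y=1.0, pi_p=1.0):
    """Variational free energy (up to constants) for a Gaussian model:
    precision-weighted prediction errors on the datum and the prior."""
    return 0.5 * (pi_y * (y - mu) ** 2 + pi_p * (mu - prior_mu) ** 2)

def dF_dmu(mu, y, prior_mu=0.0, pi_y=1.0, pi_p=1.0):
    # analytic gradient of the free energy with respect to the expectation
    return -pi_y * (y - mu) + pi_p * (mu - prior_mu)

y, mu, dt = 2.0, 0.0, 0.1
f0 = free_energy(mu, y)
for _ in range(200):
    mu -= dt * dF_dmu(mu, y)       # gradient flow: mu' = -dF/dmu

# with equal precisions, the flow settles on the posterior mean (y + prior)/2
print(round(mu, 6), free_energy(mu, y) < f0)
```

The point of the inversion is that the attracting point of these dynamics was specified by the generative model (the prior and the likelihood), rather than reverse-engineered from the equations of motion.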
As noted in the introduction, our aim was to provide a nu-
merical analysis of the minimal conditions under which hierarchi-
cal self-organisation emerges –and show that the minimisation
of variational free energy provides a sufficient account, under the
right sort of generative model. Crucially, this does not mean that any
free energy minimising ensemble will show this kind of hierarchi-
cal self organisation. Our agenda was not to suggest all systems
self-organise hierarchically; rather, we wanted to explain the ex-
istence of the hierarchical structures seen in biology, in terms of
variational principles.
Practically, the behaviour illustrated in the above simulations
depends sensitively on priors in the generative model and initial
conditions. In more details, setting prior beliefs that specify an at-
tractor in state space (endowed with a Markov blanket) is not triv-
ial, and growth in the size of the system further complicates the
task. This is reminiscent of the emergence –at later evolution-
ary stages –of bigger or more complex organisms. Furthermore,
these simulations show the final stage of self-organisation, where
the system finds itself in the vicinity of the attractor; initialising
the system too far from this attracting point (e.g., by adding ex-
tra perturbation to location) can prevent the system from elabo-
rating a bounded structure (i.e. existing). This, in turn, speaks to
the difference between these (minimal and general) simulations
and the complexity of (specific) biological systems, endowed with
a plethora of control and feedback mechanisms, which underwrite
robustness to perturbations. This sensitivity to prior parameters
and initial states leads to some interesting questions. For example: at what rate do these structures stabilise, and how does this depend upon the parameters (e.g., rate and spatial decay constants) that constitute each cell’s priors? How do the initial values (e.g., position) affect self-organisation? These questions raise an interesting issue: is there anything special about a hierarchical structure that would explain its prevalence in biotic systems?
One speculation here might be that a hierarchical (self-similar) ar-
chitecture of Markov blankets might be a free energy minimising
solution on a longer time scale, such as evolution. This should be
possible to address via simulated (pharmacological) lesion exper-
iments that block the formation of higher-order Markov blankets.
One can then measure the free energy with and without hierarchi-
cal self-organisation and consider the implications for natural se-
lection. In variational formulations, natural selection is treated as a
form of Bayesian model selection, based upon model evidence or
variational free energy ( Campbell, 2016 ; Frank, 2012 ). We hope to
pursue this in subsequent work.
9. Conclusion
This work suggests that Markov blankets are a fundamental
characteristic of biological systems. Their presence is necessary for
life –as they underwrite an existential separation of the system
from its environment, while preserving its interactions. The hierarchical organisation of complex systems – like living organisms – implies that the self-similar organisation of Markov blankets may be evident at any level of biological structure. From the point of
view of dynamical systems, Markov blankets are attractors, attract-
ing fast microscopic dynamics, while underwriting the emergence
of macroscopic (order) parameters. This circular causality nicely
captures the self-organisation of biological systems, which evolve
autonomously with a morphology (Markov blanket) that is neces-
sarily predisposed to a selective coupling with external states. The
natural place – where these attractors might be specified – is the genetic code. Clearly, this is rather speculative; however, it is possible that the astonishing diversity of flora and fauna we witness
might reflect the fact that, in a world where signals are spatially
dependent, Markov blankets are synonymous with existence.
Additional information
Simulations: The simulations reported in this paper can
be reproduced using the open access academic software SPM
( http://www.fil.ion.ucl.ac.uk/spm/software/ ). The key routines
are DEM_cells.m and DEM_cells_cells.m that illustrate self-
organisation of a single ensemble and ensemble of ensembles
respectively.
DEM_cells.m: This demo illustrates self-organisation in an en-
semble of (sixteen) cells using the same principles described in
DEM_morphogenesis , but using a simpler generative model. Over-
all, the dynamics of these simulations show how one can pre-
scribe a point attractor for each constituent of an ensemble that
endows the ensemble with a point attractor to which it converges.
In this example, we consider the special case where the point at-
tractor is itself a Markov blanket. In other words, cells come to
acquire dependencies, in terms of intracellular signalling, that con-
form to a simple Markov blanket with intrinsic or internal cells,
surrounded by active cells that are, in turn, surrounded by sen-
sory cells. This organisation rests upon intracellular signals and
active inference using generalised (second-order) variational filter-
ing. In brief, the hidden causes driving action (migration and sig-
nalling) are expectations about cell type. These expectations are
optimised using sensory signals; namely, the signals generated by
other cells. By equipping each cell with prior beliefs about what it
would sense if it was a particular cell type (i.e., internal, active or
sensory), they act (i.e., move and signal) to behave and infer their
role in an ensemble of cells that itself has a Markov blanket. In DEM_cells_cells.m, we use this first-order scheme to simulate the hierarchical emergence of Markov blankets; i.e., ensembles of cells that can be one of three types at the local level, independently of their type at the global level.
DEM_cells_cells.m: This demo is a hierarchical extension of
DEM_cells.m , where we have 16 ensembles comprising 16 cells.
Each cell has a generative model (i.e., prior beliefs) about its pos-
sible local and global cell types (i.e., internal, active or sensory).
Given posterior beliefs about what sort of self it is at the local and
global level, it can then predict the local and global intracellular
signals it would expect to receive. The ensemble of ensembles then
converges to a point attractor; where the ensemble has a Markov
blanket and each element of the ensemble comprises a cell –that
is itself a Markov blanket. The focus of this simulation is how the
local level couples to the global level and vice versa. For simplic-
ity (and computational expediency) we only model one ensemble
at the local level and assume that the remaining ensembles con-
form to the same (local) dynamics. This is effectively a mean field
approximation, where the expectations of a cell in the first ensemble about its global type are coupled to the corresponding expectations at the ensemble level, and vice versa. The results of this simulation are provided in the form of a movie and graphs.
Author contribution
Ensor Rafael Palacios and Karl Friston conceived the ideas,
wrote the manuscript and performed the simulations. Adeel Razi
conceived the ideas, reviewed the manuscript text and contributed
to the simulations. Thomas Parr and Michael Kirchhoff reviewed
the manuscript.
Declaration of Competing Interest
No competing financial interests to report.
CRediT authorship contribution statement
Ensor Rafael Palacios: Conceptualization, Methodology, Writ-
ing - original draft, Visualization. Adeel Razi: Conceptualization,
Writing - original draft. Thomas Parr: Writing - original draft.
Michael Kirchhoff: Writing - original draft. Karl Friston: Concep-
tualization, Methodology, Writing - original draft, Visualization.
Acknowledgements
This work was funded by the Wellcome Trust. KJF is
funded by a Wellcome Trust Principal Research Fellowship (Ref:
088130/Z/09/Z ). We also thank our reviewers for helpful guidance
in describing this work.
Supplementary materials
Supplementary material associated with this article can be
found, in the online version, at doi: 10.1016/j.jtbi.2019.110089 .
References
Alcocer-Cuarón, C., Rivera, A.L., Castaño, V.M., 2014. Hierarchical structure of biolog-
ical systems: a bioengineering approach. Bioengineered doi: 10.4161/bioe.26570 .
Allen, M., 2018. The foundation: mechanism, prediction, and falsification in Bayesian
enactivism: comment on “Answering Schrödinger’s question: a free-energy formulation” by Maxwell James Désormeau Ramstead et al. Phys. Life Rev. doi: 10.1016/j.plrev.2018.01.007.
Ao, P., 2009. Global view of bionetwork dynamics: adaptive landscape. J. Genet.
Genom. 36, 63–73. doi: 10.1016/S1673-8527(08)60093-4.
Arnold, L., Crauel, H., 1991. Random dynamical systems. In: Lyapunov Exponents: Proceedings of a Conference Held in Oberwolfach, May 28–June 2, 1990. doi: 10.1007/BFb0086654.
Auletta, G., 2013. Information and metabolism in bacterial chemotaxis. Entropy 15, 311–326. doi: 10.3390/e15010311.
Auletta, G., 2010. A paradigm shift in biology? Information 1, 28–59. doi: 10.3390/info1010028.
Ay, N., Bertschinger, N., Der, R., Güttler, F., Olbrich, E., 2008. Predictive information
and explorative behavior of autonomous robots. Eur. Phys. J. B 63, 329–339.
doi: 10.1140/epjb/e2008-00175-0.
Balaskas, N., Ribeiro, A., Panovska, J., Dessaud, E., Sasai, N., Page, K.M., Briscoe, J.,
Ribes, V., 2012. Gene regulatory logic for reading the sonic hedgehog signaling
gradient in the vertebrate neural tube. Cell 148, 273–284. doi: 10.1016/j.cell.2011.10.047.
Baltieri, M., Buckley, C.L., 2018. A probabilistic interpretation of PID controllers using active inference. International Conference on Simulation of Adaptive Behavior. doi: 10.1007/978-3-319-97628-0_2.
Beer, R.D., 2014. The cognitive domain of a glider in the game of life. Artif. Life
doi: 10.1162/ARTL _ a _ 00125 .
Bennett, C.H., 2003. Notes on Landauer’s principle, reversible computation, and
Maxwell’s demon. Stud. Hist. Philos. Sci. Part B. doi: 10.1016/S1355-2198(03)00039-X.
Cademartiri, L., Bishop, K.J.M., Snyder, P.W., Ozin, G.A., 2012. Using shape for self-assembly. Philos. Trans. R. Soc. A. doi: 10.1098/rsta.2011.0254.
Campbell, J.O., 2016. Universal Darwinism as a process of Bayesian inference. Front. Syst. Neurosci. doi: 10.3389/fnsys.2016.00049.
Carr, J., 1981. Applications of Centre Manifold Theory. Appl. Math. Sci. doi: 10.1007/978-1-4612-5929-9.
Cervera, J., Manzanares, J.A., Mafe, S., Levin, M., 2019. Synchronization of bioelectric
oscillations in networks of nonexcitable cells: from single-cell to multicellular
states. J. Phys. Chem. B. doi: 10.1021/acs.jpcb.9b01717 .
Chang, H.Y., Chi, J.-T., Dudoit, S., Bondre, C., van de Rijn, M., Botstein, D., Brown, P.O., 2002. Diversity, topographic differentiation, and positional memory in human fibroblasts. Proc. Natl. Acad. Sci. 99, 12877–12882. doi: 10.1073/pnas.162488599.
Clark, A., 2017. How to knit your own Markov blanket: resisting the second law with metamorphic minds. Philos. Predict. Coding 1–31. doi: 10.15502/9783958573031.
Clark, A., 2009. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. doi: 10.1093/acprof:oso/9780195333213.001.0001.
Conant, R.C., Ashby, W.R., 1970. Every good regulator of a system must be a good model of that system. Int. J. Syst. Sci. 1, 89–97. doi: 10.1080/00207727008920220.
Crauel, H., Flandoli, F., 1994. Attractors for random dynamical systems. Probab. Theory Relat. Fields 100, 365–393. doi: 10.1007/BF01193705.
Crauel, H., 1999. Global random attractors are uniquely determined by attracting deterministic compact sets. Ann. Mat. Pura Appl. CLXXVI, 57–72. doi: 10.1007/BF02505989.
Ellis, G.F.R., Noble, D., O’Connor, T., 2012. Top-down causation: an integrating theme
within and across the sciences? Introduction. Interface Focus 2, 1–3. doi: 10.
1098/rsfs.2011.0110 .
England, J.L., 2013. Statistical physics of self-replication. J. Chem. Phys. doi: 10.1063/
1.4818538 .
Evans, D.J., Searles, D.J., 2002. The fluctuation theorem. Adv. Phys. 51, 1529–1585.
doi: 10.1080/00018730210155133.
Schwabl, F., 2002. Phase Transitions, Scale Invariance, Renormalization Group Theory, and Percolation. In: Statistical Mechanics. Springer, Berlin. doi: 10.1007/978-3-662-04702-6_7.
Frank, S.A., 2012. Natural selection. V. How to read the fundamental equations of evolutionary change in terms of information theory. J. Evol. Biol. doi: 10.1111/jeb.12010.
Frank, T.D., 2004. Stochastic feedback, nonlinear families of Markov processes, and nonlinear Fokker-Planck equations. Phys. A Stat. Mech. Appl. 331, 391–408. doi: 10.1016/j.physa.2003.09.056.
Friston, K., Levin, M., Sengupta, B., Pezzulo, G., 2015. Knowing one's place: a free-energy approach to pattern regulation. J. R. Soc. Interface 12, 20141383. doi: 10.1098/rsif.2014.1383.
Friston, K., Daunizeau, J., Kilner, J., Kiebel, S.J., 2010a. Action and behavior: a free-energy formulation. Biol. Cybern. 102, 227–260. doi: 10.1007/s00422-010-0364-z.
Friston, K., 2019. A Free Energy Principle for a Particular Physics, pp. 1–148.
Friston, K., 2013. Life as we know it. J. R. Soc. Interface 10, 20130475. doi: 10.1098/rsif.2013.0475.
Friston, K., 2010. The free-energy principle: a unified brain theory? Nat. Rev. Neu-
rosci. 11, 127–138. doi: 10.1038/nrn2787 .
Friston, K., Ao, P., 2012. Free energy, value, and attractors. Comput. Math. Methods
Med. 2012. doi: 10.1155/2012/937860 .
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., Pezzulo, G., 2017. Active inference: a process theory. Neural Comput. doi: 10.1162/NECO_a_00912.
Friston, K., Sengupta, B., Auletta, G., 2014. Cognitive dynamics: from attractors to active inference. Proc. IEEE 102, 427–445. doi: 10.1109/JPROC.2014.2306251.
Friston, K., Stephan, K., Li, B., Daunizeau, J., 2010b. Generalised filtering. Math. Probl.
Eng. 2010. doi: 10.1155/2010/621670 .
Fuster, J., 2004. Upper processing stages of the perception-action cycle. Trends Cogn. Sci. 8, 143–145. doi: 10.1016/j.tics.2004.02.004.
Geisler, W.S., Diehl, R.L., 2003. A Bayesian approach to the evolution of perceptual and cognitive systems. Cogn. Sci. 27, 379–402. doi: 10.1016/S0364-0213(03)00009-0.
Gershenson, C., 2012. Guiding the self-organization of random Boolean networks.
Theory Biosci. doi: 10.1007/s12064- 011- 0144- x .
Ginzburg, V.L., 1955. On the theory of superconductivity. Nuovo Cim. Ser. 10. doi: 10.1007/BF02731579.
Hadfi, R., Ito, T., 2016. Holonic multiagent simulation of complex adaptive systems. Communications in Computer and Information Science. doi: 10.1007/978-3-319-39387-2_12.
Haken, H., 1978. Synergetics: An Introduction. Springer.
Halatek, J., Brauns, F., Frey, E., 2018. Self-organization principles of intracellular pat-
tern formation. Philos. Trans. R. Soc. B Biol. Sci. doi: 10.1098/rstb.2017.0107 .
Hilgetag, C.C., O'Neill, M.A., Young, M.P., 2000. Hierarchical organization of macaque and cat cortical sensory systems explored with a novel network processor. Philos. Trans. R. Soc. B Biol. Sci. doi: 10.1098/rstb.2000.0550.
Hohwy, J., 2016. The self-evidencing brain. Noûs 50 (2), 259–285.
Jarzynski, C., 1997. Nonequilibrium equality for free energy differences. Phys. Rev.
Lett. doi: 10.1103/PhysRevLett.78.2690 .
Jung, M., Hwang, J., Tani, J., 2015. Self-organization of spatio-temporal hierarchy via learning of dynamic visual image patterns on action sequences. PLoS One. doi: 10.1371/journal.pone.0131214.
Kauffman, S., 1995. At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. doi: 10.1017/S0016672300033772.
Kauffman, S.A., 1993. The Origins of Order. Oxford University Press doi: 10.1002/bies.
950170412 .
Kelso, J.A.S., 1995. Dynamic Patterns: The Self-Organization of Brain and Behavior. MIT Press, Cambridge, MA.
Kiebel, S.J., Daunizeau, J., Friston, K.J., 2008. A hierarchy of time-scales and the brain. PLoS Comput. Biol. 4. doi: 10.1371/journal.pcbi.1000209.
Kirchhoff, M., Parr, T., Palacios, E., Friston, K., Kiverstein, J., 2018. The Markov blankets of life: autonomy, active inference and the free energy principle. J. R. Soc. Interface. doi: 10.1098/rsif.2017.0792.
Kuchling, F., Friston, K., Georgiev, G., Levin, M., 2019. Morphogenesis as Bayesian inference: a variational approach to pattern formation and control in complex biological systems. Phys. Life Rev. doi: 10.1016/j.plrev.2019.06.001.
Lammert, H., Noel, J.K., Onuchic, J.N., 2012. The dominant folding route minimizes backbone distortion in SH3. PLoS Comput. Biol. doi: 10.1371/journal.pcbi.1002776.
MacKay, D.J.C., 2003. Information Theory, Inference and Learning Algorithms. Cambridge University Press.
Maturana, H.R., 1974. The organization of the living: a theory of the living organization. Int. J. Hum.-Comput. Stud. 51, 149–168. doi: 10.1016/S0020-7373(75)80015-0.
Mitrea, D.M., Kriwacki, R.W., 2016. Phase separation in biology; functional organiza-
tion of a higher order. Cell Commun. Signal doi: 10.1186/s12964-015-0125-7 .
Nicolis, G., Prigogine, I., 1977. Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order Through Fluctuations. John Wiley & Sons, 491 pp. doi: 10.1086/410785.
Parrondo, J.M.R., Horowitz, J.M., Sagawa, T., 2015. Thermodynamics of information. Nat. Phys. doi: 10.1038/nphys3230.
Pearl, J., 1988. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo. doi: 10.2307/2026705.
Pellet, J.-P., Elisseeff, A., 2008. Using Markov blankets for causal structure learning. J. Mach. Learn. Res. 9, 1295–1342.
Perdikis, D., Huys, R., Jirsa, V.K., 2011. Time scale hierarchies in the functional organization of complex behaviors. PLoS Comput. Biol. doi: 10.1371/journal.pcbi.1002198.
Landauer, R., 1961. Irreversibility and heat generation in the computing process. IBM J. Res. Dev. 5, 183–191.
Ramírez, J.C., Marshall, J.A.R., 2017. Can natural selection encode Bayesian priors? J. Theor. Biol. doi: 10.1016/j.jtbi.2017.05.017.
Ramstead, M.J.D., Badcock, P.B., Friston, K.J., 2018. Answering Schrödinger's question: a free-energy formulation. Phys. Life Rev. doi: 10.1016/j.plrev.2017.09.001.
Jeffery, K., Pollack, R., Rovelli, C., 2019. On the statistical mechanics of life: Schrödinger revisited. arXiv.
Schrödinger, E., 1944. What is Life? The Physical Aspect of the Living Cell. Cambridge University Press.
Seifert, U., 2012. Stochastic thermodynamics, fluctuation theorems and molecular
machines. Rep. Prog. Phys. 75, 126001. doi: 10.1088/0034-4885/75/12/126001 .
Sekimoto, K., 1998. Langevin equation and thermodynamics. Prog. Theor. Phys.
Suppl. doi: 10.1143/PTPS.130.17 .
14 E.R. Palacios, A. Razi and T. Parr et al. / Journal of Theoretical Biology 486 (2020) 110089
Tabata, T., Takei, Y., 2004. Morphogens, their identification and regulation. Development. doi: 10.1242/dev.01043.
Tribus, M., 1961. Information theory as the basis for thermostatics and thermody-
namics. J. Appl. Mech. 28, 1–8. doi: 10.1115/1.3640461 .
Tschacher, W., Haken, H., 2007. Intentionality in non-equilibrium systems? The functional aspects of self-organized pattern formation. New Ideas Psychol. 25, 1–15. doi: 10.1016/j.newideapsych.2006.09.002.
Varela, F.G., Maturana, H.R., Uribe, R., 1974. Autopoiesis: the organization of living systems, its characterization and a model. BioSystems 5, 187–196. doi: 10.1016/0303-2647(74)90031-8.
Vatansever, E., Fytas, N.G., 2018. Dynamic phase transition of the Blume-Capel model in an oscillating magnetic field. Phys. Rev. E. doi: 10.1103/PhysRevE.97.012122.
Virgo, N., 2011. Thermodynamics and the Structure of Living Systems. D.Phil. Dissertation, University of Sussex.
Virgo, N., Egbert, M.D., Froese, T., 2011. The role of the spatial boundary in autopoiesis. European Conference on Artificial Life. doi: 10.1007/978-3-642-21283-3_30.
Whitesides, G.M., Grzybowski, B., 2002. Self-assembly at all scales. Science 295, 2418–2421. doi: 10.1126/science.1070821.
Winn, J., Bishop, C.M., 2005. Variational message passing. J. Mach. Learn. Res. 6, 661–694.
Yufik, Y.M., Friston, K., 2016. Life and understanding: the origins of "Understanding" in self-organizing nervous systems. Front. Syst. Neurosci. doi: 10.3389/fnsys.2016.00098.
... In this context, conditional independence occurs: when the state of the organism/environment boundary, i.e. the MB, is fixed, what happens on one side of the boundary does not influence what happens on the other side. This means that all the necessary information for explaining the behavior of the internal system is given by the state of the blanket [19,20]. Considering all that said above, concerns may raise about the exact nature of MBs, from the evidence that literature looks to be filled with confusion about the two different souls of MB, since they are addressed in some cases as statistical models, and in others as real objects. ...
Conference Paper
This paper's aim is twofold: on the one hand, to provide an overview of the state of the art of some kind of Bayesian networks, i.e. Markov blankets (MB), focusing on their relationship with the cognitive theories of the free energy principle (FEP) and active inference. On the other hand, to sketch how these concepts can be practically applied to artificial intelligence (AI), with special regard to their use in the field of sustainable development. The proposal of this work, indeed, is that understanding exactly to what extent MBs may be framed in the context of FEP and active inference, could be useful to implement tools to support decision-making processes for addressing sustainability. Conversely , looking at these tools considering how they could be related to those theoretical frameworks, may help to shed some light on the debate about FEP, active inference and its linkages with MBs, which still seems to be clarified. For the above purposes, the paper is organized as follows: after a general introduction, section 2 explains what a MB is, and how it is related to the concepts of FEP and active inference. Thus, section 3 focuses on how MBs, joint with FEP and active inference, are employed in the field of AI. On these grounds, section 4 explores whether MBs, FEP, and active inference can be useful to face the issues related to sustainability.
Article
The immune system is a central component of organismic function in humans. This paper addresses self‐organization of biological systems in relation to—and nested within—other biological systems in pregnancy. Pregnancy constitutes a fundamental state for human embodiment and a key step in the evolution and conservation of our species. While not all humans can be pregnant, our initial state of emerging and growing within another person's body is universal. Hence, the pregnant state does not concern some individuals but all individuals. Indeed, the hierarchical relationship in pregnancy reflects an even earlier autopoietic process in the embryo by which the number of individuals in a single blastoderm is dynamically determined by cell– interactions. The relationship and the interactions between the two self‐organizing systems during pregnancy may play a pivotal role in understanding the nature of biological self‐organization per se in humans. Specifically, we consider the role of the immune system in biological self‐ organization in addition to neural/brain systems that furnish us with a sense of self. We examine the complex case of pregnancy, whereby two immune systems need to negotiate the exchange of resources and information in order to maintain viable self‐regulation of nested systems. We conclude with a proposal for the mechanisms—that scaffold the complex relationship between two self‐organising systems in pregnancy—through the lens of the Active Inference, with a focus on shared Markov blankets.
Article
This paper concerns the distributed intelligence or federated inference that emerges under belief-sharing among agents who share a common world—and world model. Imagine, for example, several animals keeping a lookout for predators. Their collective surveillance rests upon being able to communicate their beliefs—about what they see—among themselves. But, how is this possible? Here, we show how all the necessary components arise from minimising free energy. We use numerical studies to simulate the generation, acquisition and emergence of language in synthetic agents. Specifically, we consider inference, learning and selection as minimising the variational free energy of posterior (i.e., Bayesian) beliefs about the states, parameters and structure of generative models, respectively. The common theme—that attends these optimisation processes—is the selection of actions that minimise expected free energy, leading to active inference, learning and model selection (a.k.a., structure learning). We first illustrate the role of communication in resolving uncertainty about the latent states of a partially observed world, on which agents have complementary perspectives. We then consider the acquisition of the requisite language—entailed by a likelihood mapping from an agent's beliefs to their overt expression (e.g., speech)—showing that language can be transmitted across generations by active learning. Finally, we show that language is an emergent property of free energy minimisation, when agents operate within the same econiche. We conclude with a discussion of various perspectives on these phenomena; ranging from cultural niche construction, through federated learning, to the emergence of complexity in ensembles of self-organising systems.
Chapter
To adapt an autonomous system to a newly given cognitive goal, we propose a method to dynamically combine multiple perception-action loops. Focusing on the fact that humans change their embodiment during development, the perception-action loops associated with each body part are combined. Applying the method to an end-effector movement task with a robot arm shows that the joints necessary to accomplish the target task are selectively moved in practical time. The result suggests that the robot adapts to the newly given cognitive goal and that developmental embodiment is an essential component in the design of an autonomous system.
Article
Full-text available
We study the statistical underpinnings of life, in particular its increase in order and complexity over evolutionary time. We question some common assumptions about the thermodynamics of life. We recall that contrary to widespread belief, even in a closed system entropy growth can accompany an increase in macroscopic order. We view metabolism in living things as microscopic variables directly driven by the second law of thermodynamics, while viewing the macroscopic variables of structure, complexity and homeostasis as mechanisms that are entropically favored because they open channels for entropy to grow via metabolism. This perspective reverses the conventional relation between structure and metabolism, by emphasizing the role of structure for metabolism rather than the converse. Structure extends in time, preserving information along generations, particularly in the genetic code, but also in human culture. We argue that increasing complexity is an inevitable tendency for systems with these dynamics and explain this with the notion of metastable states, which are enclosed regions of the phase-space that we call “bubbles,” and channels between these, which are discovered by random motion of the system. We consider that more complex systems inhabit larger bubbles (have more available states), and also that larger bubbles are more easily entered and less easily exited than small bubbles. The result is that the system entropically wanders into ever-larger bubbles in the foamy phase space, becoming more complex over time. This formulation makes intuitive why the increase in order/complexity over time is often stepwise and sometimes collapses catastrophically, as in biological extinction.
Chapter
Full-text available
In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. The Bayesian brain hypothesis, predictive coding, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to unify understandings of life and cognition within general mathematical frameworks derived from information and control theory, statistical physics and machine learning. The connections between information and control theory have been discussed since the 1950’s by scientists like Shannon and Kalman and have recently risen to prominence in modern stochastic optimal control theory. However, the implications of the confluence of these two theoretical frameworks for the biological sciences have been slow to emerge. Here we argue that if the active inference proposal is to be taken as a general process theory for biological systems, we need to consider how existing control theoretical approaches to biological systems relate to it. In this work we will focus on PID (Proportional-Integral-Derivative) controllers, one of the most common types of regulators employed in engineering and more recently used to explain behaviour in biological systems, e.g. chemotaxis in bacteria and amoebae or robust adaptation in biochemical networks. Using active inference, we derive a probabilistic interpretation of PID controllers, showing how they can fit a more general theory of life and cognition under the principle of (variational) free energy minimisation under simple linear generative models.
Article
Full-text available
The free-energy principle (FEP) is a formal model of neuronal processes that is widely recognised in neuroscience as a unifying theory of the brain and biobehaviour. More recently, however, it has been extended beyond the brain to explain the dynamics of living systems, and their unique capacity to avoid decay. The aim of this review is to synthesise these advances with a meta-theoretical ontology of biological systems called variational neuroethology, which integrates the FEP with Tinbergen's four research questions to explain biological systems across spatial and temporal scales. We exemplify this framework by applying it to Homo sapiens, before translating variational neuroethology into a systematic research heuristic that supplies the biological, cognitive, and social sciences with a computationally tractable guide to discovery.
Article
Full-text available
Dynamic patterning of specific proteins is essential for the spatio-temporal regulation of many important intracellular processes in prokaryotes, eukaryotes and multicellular organisms. The emergence of patterns generated by interactions of diffusing proteins is a paradigmatic example for self-organization. In this article, we review quantitative models for intracellular Min protein patterns in Escherichia coli, Cdc42 polarization in Saccharomyces cerevisiae and the bipolar PAR protein patterns found in Caenorhabditis elegans. By analysing the molecular processes driving these systems we derive a theoretical perspective on general principles underlying self-organized pattern formation. We argue that intracellular pattern formation is not captured by concepts such as ‘activators’, ‘inhibitors’ or ‘substrate depletion’. Instead, intracellular pattern formation is based on the redistribution of proteins by cytosolic diffusion, and the cycling of proteins between distinct conformational states. Therefore, mass-conserving reaction–diffusion equations provide the most appropriate framework to study intracellular pattern formation. We conclude that directed transport, e.g. cytosolic diffusion along an actively maintained cytosolic gradient, is the key process underlying pattern formation. Thus the basic principle of self-organization is the establishment and maintenance of directed transport by intracellular protein dynamics. This article is part of the theme issue ‘Self-organization in cell biology’.
Article
Full-text available
This work addresses the autonomous organization of biological systems. It does so by considering the boundaries of biological systems, from individual cells to Home sapiens, in terms of the presence of Markov blankets under the active inference scheme-a corollary of the free energy principle. A Markov blanket defines the boundaries of a system in a statistical sense. Here we consider how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket; thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries. This allows us to show that: (i) any living system is a Markov blanketed system and (ii) the boundaries of such systems need not be co-extensive with the biophysical boundaries of a living organism. In other words, autonomous systems are hierarchically composed of Markov blankets of Markov blankets-all the way down to individual cells, all the way up to you and me, and all the way out to include elements of the local environment.
Article
Recent advances in molecular biology such as gene editing [1], bioelectric recording and manipulation [2] and live cell microscopy using fluorescent reporters [3], [4] – especially with the advent of light-controlled protein activation through optogenetics [5] – have provided the tools to measure and manipulate molecular signaling pathways with unprecedented spatiotemporal precision. This has produced ever increasing detail about the molecular mechanisms underlying development and regeneration in biological organisms. However, an overarching concept – that can predict the emergence of form and the robust maintenance of complex anatomy – is largely missing in the field. Classic (i.e., dynamic systems and analytical mechanics) approaches such as least action principles are difficult to use when characterizing open, far-from equilibrium systems that predominate in Biology. Similar issues arise in neuroscience when trying to understand neuronal dynamics from first principles. In this (neurobiology) setting, a variational free energy principle has emerged based upon a formulation of self-organization in terms of (active) Bayesian inference. The free energy principle has recently been applied to biological self-organization beyond the neurosciences [6], [7]. For biological processes that underwrite development or regeneration, the Bayesian inference framework treats cells as information processing agents, where the driving force behind morphogenesis is the maximization of a cell's model evidence. This is realized by the appropriate expression of receptors and other signals that correspond to the cell's internal (i.e., generative) model of what type of receptors and other signals it should express. The emerging field of the free energy principle in pattern formation provides an essential quantitative formalism for understanding cellular decision-making in the context of embryogenesis, regeneration, and cancer suppression. 
In this paper, we derive the mathematics behind Bayesian inference – as understood in this framework – and use simulations to show that the formalism can reproduce experimental, top-down manipulations of complex morphogenesis. First, we illustrate this ‘first principle’ approach to morphogenesis through simulated alterations of anterior-posterior axial polarity (i.e., the induction of two heads or two tails) as in planarian regeneration. Then, we consider aberrant signaling and functional behavior of a single cell within a cellular ensemble – a first step in carcinogenesis – as false ‘beliefs’ about what a cell should ‘sense’ and ‘do’. We further show that simple modifications of the inference process can cause – and rescue – mis-patterning of developmental and regenerative events without changing the implicit generative model of a cell as specified, for example, by its DNA. This formalism offers a new road map for understanding developmental change in evolution and for designing new interventions in regenerative medicine settings.
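The core mechanism invoked in the two abstracts above – a cell minimizing variational free energy by gradient descent on prediction error – can be sketched in a few lines. The following toy model assumes a single hidden cause with a Gaussian prior and likelihood of unit variance; it is a hedged illustration of the general scheme, not the paper's actual generative model.

```python
# Minimal sketch of free-energy minimisation by gradient descent on
# prediction error. The generative model (one Gaussian cause, unit
# variances) and all parameter values are illustrative assumptions.

def simulate(sensory_input=2.0, prior_mean=0.0, steps=200, lr=0.1):
    """One 'cell' infers a hidden cause mu from a sensed signal.

    Free energy (up to constants), for unit variances:
        F = 0.5 * (s - mu)**2 + 0.5 * (mu - prior_mean)**2
    Gradient descent on F drives mu to the posterior mean.
    """
    mu = 0.0  # initial belief about the hidden cause
    for _ in range(steps):
        eps_s = sensory_input - mu   # sensory prediction error
        eps_p = mu - prior_mean      # prior prediction error
        mu += lr * (eps_s - eps_p)   # -dF/dmu = eps_s - eps_p
    return mu

mu = simulate()
print(mu)  # converges to the posterior mean, (s + prior_mean) / 2 = 1.0
```

In the papers, the same gradient flow is run simultaneously over internal states (inference of cell type), active states (signal expression) and position (chemotaxis); this sketch isolates only the inferential step.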
Article
Biological networks use collective oscillations for information processing tasks. In particular, oscillatory membrane potentials have been observed in non-excitable cells and bacterial communities where specific ion channel proteins contribute to the bioelectric coordination of large populations. We aim to describe theoretically the oscillatory spatio-temporal patterns that emerge at the multicellular level from the single-cell bioelectric dynamics. To this end, we focus on two key questions: (i) what single-cell properties are relevant to multicellular behavior and (ii) what properties defined at the multicellular level can allow an external control of the bioelectric dynamics. In particular, we explore the interplay between transcriptional and translational dynamics and membrane-potential dynamics in a model multicellular ensemble, describe the spatio-temporal patterns that arise when the average electric potential allows groups of cells to act as a coordinated multicellular patch, and characterize the resulting synchronization phenomena. The simulations concern bioelectric networks and collective communication across different scales based on oscillatory and synchronization phenomena, thus shedding light on the physiological dynamics of a wide range of endogenous contexts across embryogenesis and regeneration.
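The synchronization phenomena described in this abstract can be illustrated with the standard Kuramoto model of mean-field coupled phase oscillators; this is a generic toy for oscillatory coordination across an ensemble, not the bioelectric model the authors actually use, and the coupling strength and frequency spread below are arbitrary choices.

```python
import numpy as np

# Kuramoto-style sketch of ensemble synchronisation (illustrative toy,
# not the paper's bioelectric model).

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 means synchrony."""
    return abs(np.exp(1j * theta).mean())

def simulate(n=50, K=2.0, dt=0.01, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.1, n)        # natural frequencies per cell
    theta = rng.uniform(0, 2 * np.pi, n)   # initial phases
    for _ in range(steps):
        # Mean-field coupling: each cell is nudged toward the ensemble
        # phase, d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)
        coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
        theta += dt * (omega + K * coupling)
    return order_parameter(theta)

r = simulate()
print(r)  # coupling well above the critical value -> near-synchronous ensemble
```

The qualitative point matches the abstract: once coupling exceeds a critical threshold, groups of oscillators lock together and behave as a single coordinated patch.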
Article
We employ numerical simulations and finite-size scaling techniques to investigate the properties of the dynamic phase transition that is encountered in the Blume-Capel model subjected to a periodically oscillating magnetic field. We mainly focus on the study of the two-dimensional system for various values of the crystal-field coupling in the second-order transition regime. Our results indicate that the present non-equilibrium phase transition belongs to the universality class of the equilibrium Ising model and allow us to construct a dynamic phase diagram, in analogy to the equilibrium case, at least for the range of parameters considered. Finally, we present some complementary results for the three-dimensional model, where again the obtained estimates for the critical exponents fall into the universality class of the corresponding three-dimensional equilibrium Ising ferromagnet.
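The dynamic phase transition studied above is probed by Monte Carlo simulation of the Blume-Capel Hamiltonian, H = -J Σ s_i s_j + D Σ s_i² - h(t) Σ s_i with spins s ∈ {-1, 0, +1}, in a sinusoidally oscillating field. The following Metropolis sketch shows the basic setup; the lattice size, temperature, crystal field D, and field amplitude/period are small illustrative values, far from the scales needed for the finite-size scaling analysis the abstract reports.

```python
import numpy as np

# Toy Metropolis simulation of the 2D Blume-Capel model in an
# oscillating field (J = 1). Parameters are illustrative only.

def sweep(spins, T, D, h, rng):
    """One Metropolis sweep; spins take values -1, 0, +1."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        s_old = spins[i, j]
        s_new = rng.choice([-1, 0, 1])
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        # Energy change under H = -sum s_i s_j + D sum s_i^2 - h sum s_i
        dE = (-(s_new - s_old) * nn
              + D * (s_new**2 - s_old**2)
              - h * (s_new - s_old))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = s_new

def run(L=12, T=1.0, D=0.5, h0=0.3, period=50, cycles=4, seed=1):
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 0, 1], size=(L, L))
    mags = []
    for t in range(period * cycles):
        h = h0 * np.sin(2 * np.pi * t / period)  # oscillating field
        sweep(spins, T, D, h, rng)
        mags.append(spins.mean())
    # The period-averaged magnetisation Q is the dynamic order parameter.
    return float(np.mean(mags))

print(run())
```

In the actual study, the order parameter Q is averaged over many field cycles and system sizes, and its finite-size scaling yields the Ising-class critical exponents quoted in the abstract.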
Book
Stuart Kauffman here presents a brilliant new paradigm for evolutionary biology, one that extends the basic concepts of Darwinian evolution to accommodate recent findings and perspectives from the fields of biology, physics, chemistry and mathematics. The book drives to the heart of the exciting debate on the origins of life and maintenance of order in complex biological systems. It focuses on the concept of self-organization: the spontaneous emergence of order widely observed throughout nature. Kauffman here argues that self-organization plays an important role in the emergence of life itself and may play as fundamental a role in shaping life's subsequent evolution as does the Darwinian process of natural selection. Yet until now no systematic effort has been made to incorporate the concept of self-organization into evolutionary theory. The construction requirements which permit complex systems to adapt remain poorly understood, as is the extent to which selection itself can yield systems able to adapt more successfully. This book explores these themes. It shows how complex systems, contrary to expectations, can spontaneously exhibit stunning degrees of order, and how this order, in turn, is essential for understanding the emergence and development of life on Earth. Topics include the new biotechnology of applied molecular evolution, with its important implications for developing new drugs and vaccines; the balance between order and chaos observed in many naturally occurring systems; new insights concerning the predictive power of statistical mechanics in biology; and other major issues. Indeed, the approaches investigated here may prove to be the new center around which biological science itself will evolve. The work is written for all those interested in the cutting edge of research in the life sciences.