Modeling Emerging Interpersonal Synchrony and its Related Adaptive Short-Term Affiliation and Long-Term Bonding: A Second-Order Multi-Adaptive Neural Agent Model
Sophie C. F. Hendrikse**, Jan Treur and Sander L. Koole

Amsterdam Emotion Regulation Lab, Department of Clinical Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
Methodology and Statistics Research Unit, Institute of Psychology, Leiden University, Leiden, The Netherlands
Social AI Group, Department of Computer Science, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
s.c.f.hendrikse@uva.nl, j.treur@vu.nl, s.l.koole@vu.nl
Accepted 26 April 2023
Published Online 28 June 2023
When people interact, their behavior tends to become synchronized, a mutual coordination process that fosters short-term adaptations, like increased affiliation, and long-term adaptations, like increased bonding. This paper addresses for the first time how such short-term and long-term adaptivity induced by synchronization can be modeled computationally by a second-order multi-adaptive neural agent model. It addresses movement, affect and verbal modalities and both intrapersonal synchrony and interpersonal synchrony. The behavior of the introduced neural agent model was evaluated in a simulation paradigm with different stimuli and communication-enabling conditions. Moreover, in this paper, mathematical analysis is also addressed for adaptive network models and their positioning within the landscape of adaptive dynamical systems. The first type of analysis addressed shows that any smooth adaptive dynamical system has a canonical representation by a self-modeling network. This implies theoretically that the self-modeling network format is widely applicable, which also has been found in many practical applications using this approach. Furthermore, stationary point and equilibrium analysis was addressed and applied to the introduced self-modeling network model. It was used to obtain verification of the model, providing evidence that the implemented model is correct with respect to its design specifications.
Keywords: Multi-adaptive neural agent model; social interaction; interpersonal synchrony; intrapersonal synchrony; short-term affiliation; long-term bonding.
1. Introduction
Whenever people interact, their behavior tends to become mutually coordinated in time, or synchronized. Interpersonal synchrony has been found to enhance relationship functioning, for example, by inducing greater levels of closeness, concentration, coordination, cooperation, affiliation, alliance, connection, or bonding (Refs. 1-10). In the literature (Refs. 11 and 12), it was suggested that to model the complex, cyclical types of dynamics that occur, a dynamical systems modeling approach is needed.
**Corresponding author.
This is an Open Access article published by World Scientific Publishing Company. It is distributed under the terms of the Creative Commons Attribution 4.0 (CC BY) License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
International Journal of Neural Systems, Vol. 33, No. 7 (2023) 2350038 (41 pages)
© The Author(s)
DOI: 10.1142/S0129065723500387
Notably, the benefits of interpersonal synchrony include patterns of mutual adaptation both in the short term and in the long term. For instance, in the context of psychotherapy, a patient and therapist who synchronize their movements (Ref. 8) may experience a stronger sense of sharing the present moment during a therapeutic session (Ref. 13). Over multiple sessions, this increased social presence may strengthen the therapeutic bond, which allows the patient and therapist to work together more effectively (Ref. 14).
The main goal of this paper is to address computational and mathematical analyses of the complex adaptive dynamics of such forms of short-term and long-term adaptivity of interaction behavior related to interpersonal synchronization, and to verify the hypothesis that the underlying mechanisms put forward in the literature indeed generate these social phenomena by an emerging and adaptive interactive process. More specifically, these analyses cover three different but closely related levels:

• Analysis of mechanisms from the literature in psychology and neuroscience that are suggested to play a role in these complex multi-adaptive dynamics. These include mechanisms for different forms of (synaptic and nonsynaptic) plasticity and control over plasticity by metaplasticity (also called second-order plasticity or second-order adaptivity). Here, the underlying hypothesis in the literature is that these mechanisms are sufficient to generate the emerging and adaptive patterns of synchronization and adaptation of the interaction behavior. This hypothesis is tested in silico in this paper by computational simulation based on these mechanisms.
• Analysis of the mathematical formalization of these mechanisms behind the considered complex adaptive dynamics by an agent-based second-order multi-adaptive dynamical system. This covers analysis of conducted simulation experiments based on such formalization and includes analysis of stationary points and equilibria for the occurring dynamics and adaptivity.
• Analysis of how the specific representation of such an adaptive dynamical system by self-modeling networks used here is positioned in the wider landscape of adaptive dynamical systems. It is analyzed how the self-modeling network format can be used to provide a canonical representation for any smooth adaptive dynamical system, which also covers most neural system models.
So, more specifically, the neural agent model that is a central focus here is an adaptive dynamical systems model based on a number of mechanisms in the literature on cognitive, behavioral, and affective neuroscience. A neural basis for short-term behavioral adaptivity can be found in the recent work on the (nonsynaptic, intrinsic) adaptive excitability of (neural) states (Refs. 15-18). By contrast, a neural basis for long-term adaptivity can be found in the classical notion of synaptic plasticity (Refs. 19-22). Together, these two fundamentally different forms of adaptation yield a model of a multi-adaptive neural agent. The two forms of adaptation also interact with each other.
The extent of adaptation that an agent requires may vary from situation to situation. The capacity to adjust plasticity to the demands of the situation relates to metaplasticity (Refs. 23 and 24). This model of a neural agent models metaplasticity as a second-order form of plasticity that controls plasticity in a context-sensitive manner. The resulting model yields a second-order multi-adaptive neural agent, which is human-like in the sense that it incorporates an interplay of three major mechanisms for adaptivity that, according to the neuroscientific literature, characterize human agents.
Note that it is not claimed that the model is human-like in the sense that it would cover the applied neural mechanisms at a physiological level; addressing that level was not within the scope of this research. Instead, these neural mechanisms were considered and modeled in a more abstract manner at a functional level. Investigating physiological scalability would be another, subsequent enterprise to be addressed.
This model of a neural agent further includes intrapersonal synchrony and interpersonal synchrony and their links to short-term and long-term behavioral adaptivity. To model the pathway from synchrony patterns to this behavioral adaptivity, we included both built-in intrapersonal synchrony detectors and interpersonal synchrony detectors. Here, intrapersonal synchrony means that within an agent, the actions for the different modalities occur in a coordinated manner. Interpersonal synchrony means that for each modality, the actions of the two agents occur in a coordinated manner. The addressed modalities are movement, affect, and verbal modalities. We
included these three modalities because they have each been shown to be influential in interpersonal behavior (Ref. 14).
We evaluated the neural agent model in a series of simulation experiments for two agents with a setup in which a number of stochastic circumstances were covered in different (time) episodes. The simulations included not only episodes with a stochastic common stimulus for the two agents, but also episodes with different stochastic stimuli for the agents. Moreover, to analyze the role of communication, stochastic circumstances were also included for episodes when communication was enabled by the environment and episodes when communication was not enabled.
Next, as part of further analysis of the self-modeling network modeling approach, it is shown how any (smooth) adaptive dynamical system can be modeled in a canonical way as a self-modeling network model. In this way, any adaptive dynamical system has its canonical representation as a self-modeling network model and can be analyzed based on this canonical representation. On the one hand, this shows that the chosen modeling approach does not introduce biases or limitations if adaptive dynamical systems are modeled using it. In particular, it also shows that the approach generalizes most common neural system models. On the other hand, this was a basis to show how stationary point analysis and equilibrium analysis for adaptive dynamical systems can be performed by using the self-modeling network representation for the specific adaptive dynamical system model introduced here.
2. Main Assumptions and Background
Knowledge
In this section, we present the main assumptions behind the introduced adaptive neural agent model and relate them to the relevant neuroscience literature. This grounding in neuroscience is based on pathways for a circular interplay of synchrony with both nonsynaptic plasticity (Ref. 16) and synaptic plasticity (Refs. 19-22), thereby covering both short-term and long-term time scales and their interaction. More specifically, the following underlying assumptions are made for the pathways involved; for a conceptual overview, see Fig. 1. Note that the main example used in the presentation concerns two agents A and B and their interaction.
2.1. Interpersonal synchrony leads to
adaptation of interaction behavior
Interpersonal synchrony is often followed by a behavioral change or adaptation of mutual behavior (Refs. 1-10). This adaptive shift in mutual behavioral coordination has been observed, for instance, in psychotherapy sessions. Research has shown that therapists were rated more favorably and as more empathic when, beforehand, they were instructed to make their movements more synchronized with the client (Refs. 25-27).
Fig. 1. Conceptual overview of the processes involved in multimodal (intra- and interpersonal) synchrony and behavioral
adaptivity in social interaction.
Similarly, Ramseyer and Tschacher (Ref. 8) found that initial movement synchrony between client and therapist was predictive of the client's experience of the quality of the alliance at the end of each session. Furthermore, Koole and Tschacher (Ref. 14) reviewed converging evidence that movement synchrony has a positive effect on the working alliance between patient and therapist. More generally, synchrony in face-to-face interactions has been found to promote interpersonal affiliation (Refs. 10 and 28).
2.2. Behavioral adaptation after
interpersonal synchrony occurs both in
the form of short-term adaptation and
long-term adaptation
Much research on interpersonal synchrony has focused on short-term adaptive changes in interpersonal coordination (Refs. 1, 6, 9, 10 and 29). However, several lines of research have observed effects of interpersonal synchrony on long-term adaptation as well. First, developmental research has observed that movement synchrony between infant and caregivers predicts social interaction patterns of the child several years later (Ref. 28). Second, research on close relationships suggests that early patterns of interpersonal synchrony predict subsequent indicators of relationship functioning; for instance, one study found that spouses' patterns of cortisol variation converged over a period of years, indicating long-term shifts in interpersonal coordination (Ref. 30).
Third and last, research on psychotherapy processes has found that markers of interpersonal synchrony in early sessions can predict the development of the therapeutic relationship (Ref. 8) and therapeutic outcomes (Ref. 7). Long-term adaptation processes remain less well-studied than short-term adaptation processes. Nevertheless, the convergence of evidence is sufficient to conclude that interpersonal synchrony is likely to promote both short-term and long-term adaptation in interpersonal relationships.
2.3. The behavioral adaptation relies on different neural mechanisms: Synaptic plasticity of connections and nonsynaptic plasticity of intrinsic excitability
In the neuroscientific literature, a distinction is made between synaptic and nonsynaptic (intrinsic) adaptation. The classical notion of synaptic plasticity has been used to explain long-term behavioral adaptation (Refs. 19-22). This addresses how the strength of a connection between different states is adapted over time due to simultaneous activation of the connected states. By contrast, the nonsynaptic adaptation of intrinsic excitability of (neural) states has been addressed in more detail more recently (Refs. 15-18). The latter form of adaptation has been related, for example, to homeostatic regulation (Ref. 17) and also to how deviant dopamine levels during sleep allow dreams to use more associations due to more easily excitable neurons (Ref. 31). Moreover, both (synaptic and nonsynaptic) forms of adaptation can easily work together (Ref. 32).
In the neural agent model, these two adaptation mechanisms and their interaction have been used to model behavioral adaptivity: the former for long-term adaptation and the latter for short-term adaptation. Here an interplay of two types of adaptivity occurs. Synchrony does not only lead to short-term adaptation; short-term adaptation itself also intensifies interaction, which can lead to more synchrony, which in turn can strengthen the long-term adaptation. Besides, long-term adaptivity also strengthens interaction, which leads to more synchrony and consequently stronger short-term adaptivity. In this way, via multiple circular pathways, a dynamic interplay occurs between synchrony, short-term adaptivity and long-term adaptivity.
Plasticity is not a constant feature, as it often is highly context-dependent according to what is called metaplasticity (Refs. 23 and 24). For example, "adaptation accelerates with increasing stimulus exposure" (Ref. 24). To enable such context-sensitive control of plasticity, second-order adaptation (i.e. adaptation of the adaptation) has been included in the neural agent model, which makes the model more realistic.
2.4. The pathways from synchrony to
behavioral adaptation involve
synchrony detection states
If synchrony occurs for an agent and due to this the agent adapts the interaction behavior, this suggests that agents possess a facility to notice or experience synchrony patterns for the different modalities. Indeed, the assumption is made that agents do in some way (perhaps unconsciously) detect synchrony and from there may trigger behavioral adaptation for
their interaction behavior. In the pathway from synchrony patterns to changed interaction behavior patterns, such synchrony detection states can be considered as specific mediating mental states. Such a state p in general has been called a mediating state for the effect of a past pattern a on a future pattern b entailed by pattern a (Refs. 33 and 34); similarly, such a (brain) state is referred to as describing "informational criteria" for future activation (Refs. 35 and 36). In line with previous research (Ref. 37), it is assumed that not only the detected interpersonal synchrony but also the detected intrapersonal synchrony relating to a conscious emotion has a causal effect on the behavioral adaptivity.
3. Self-Modeling Network Modeling
The presented neural agent model is an adaptive dynamical system model designed and specified based on network-oriented modeling. The network-oriented modeling approach used here is basically a causal network modeling approach where nodes model states that have activation values that change over time, and connections between these states model causal relations that have their effects in a temporal, dynamic manner on the state activations. Thus, dynamical systems are modeled. Moreover, by enabling these causal relations and the characteristics for their effects on state activations to change as well, adaptive dynamical systems are also covered. This is also done in a network-oriented manner, using a so-called self-modeling network architecture. In Sec. 6, it will be shown that any smooth adaptive dynamical system can be modeled in this way. In this section, this modeling approach is briefly introduced.
Following the network-oriented modeling approach (Refs. 38-40) used here, a temporal-causal network model is characterized by the following (here X and Y denote nodes of the network, also called states, and X(t) and Y(t) denote their activation values at time t):

• Connectivity characteristics: connections from a state X to a state Y and their weights ω_{X,Y}.
• Aggregation characteristics: for any state Y, some combination function c_{π_Y,Y}(V_1, ..., V_k) with vector of parameter values π_Y = (π_{1,Y}, ..., π_{m,Y}) defines the aggregation that is applied to the impacts V_i = ω_{X_i,Y} X_i(t) on Y from its incoming connections from states X_i.
• Timing characteristics: each state Y has a speed factor η_Y defining how fast it changes for given causal impact.

Note that for the sake of notational simplicity, c_{π_Y,Y} will often be denoted by c_Y, omitting the subscript π_Y; this does not mean that there are no parameters, they are just left implicit.
These network characteristics ω_{X,Y}, c_{π_Y,Y}, π_Y, and η_Y for a given network model serve as a (formal) design specification of this network model. The following canonical difference and differential equations for temporal-causal network models, used for simulation and analysis of such network models, incorporate these network characteristics ω_{X,Y}, c_Y, π_{i,Y}, and η_Y in a standard numerical format:

Y(t + Δt) = Y(t) + η_Y [c_{π_Y,Y}(ω_{X_1,Y} X_1(t), ..., ω_{X_k,Y} X_k(t)) − Y(t)] Δt,
dY(t)/dt = η_Y [c_{π_Y,Y}(ω_{X_1,Y} X_1(t), ..., ω_{X_k,Y} X_k(t)) − Y(t)]        (1)

for any state Y, where X_1 to X_k are the states from which Y gets its incoming connections. Note that (1) has a format similar to that of recurrent neural networks. Within the dedicated software environment implemented in MATLAB, a large number of useful basic combination functions (currently around 60) are included in a combination function library (Ref. 39).
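As an illustration of this numerical format, the following minimal Python sketch simulates Eq. (1) for a single state by Euler integration. It is only a sketch of the update rule above; the actual software environment mentioned is implemented in MATLAB, and the function and variable names used here (simulate_state, omega, eta, combination) are illustrative assumptions, not names from that library.

```python
import numpy as np

def simulate_state(X, omega, eta, combination, Y0=0.0, dt=0.5, t_end=50.0):
    """Euler simulation of one temporal-causal network state Y following Eq. (1)."""
    ts = np.arange(0.0, t_end, dt)
    Y = np.empty(len(ts))
    Y[0] = Y0
    for i in range(1, len(ts)):
        impacts = omega * X(ts[i - 1])             # impacts omega_{X_i,Y} * X_i(t)
        aggregated = combination(impacts)          # c_Y(...) applied to the impacts
        Y[i] = Y[i - 1] + eta * (aggregated - Y[i - 1]) * dt   # Eq. (1)
    return ts, Y

# Example use: two constant incoming states and a normalized (scaled) sum as combination function.
ts, Y = simulate_state(X=lambda t: np.array([0.8, 0.6]),
                       omega=np.array([1.0, 1.0]),
                       eta=0.5,
                       combination=lambda v: v.sum() / 2.0)
print(round(Y[-1], 3))   # converges toward the aggregated impact 0.7
```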
The above concepts enable us to design network models and their dynamics in a declarative manner, based on mathematically defined functions and relations, specified in a standard table format covering all network characteristics (called role matrices, see Appendix B). The examples of combination functions that are applied in the model introduced here can be found in Table 1. Here, for the third and fourth function, rand(1, 1) draws a random number from [0, 1] in a uniform manner and a is a persistence factor (with value 0.5 used in the simulations).
Realistic network models are usually adaptive: often not only their states but also some of their network characteristics change over time. By using a self-modeling network (also called a reified network), a similar network-oriented conceptualization can
also be applied to adaptive networks to obtain a declarative description using mathematically defined functions and relations for them as well (Refs. 39 and 40). This works through the addition of new states to the network (called self-model states) which represent (adaptive) network characteristics. In the graphical three-dimensional (3D) format as shown in Sec. 4, such additional states are depicted at a next level (called self-model level or reification level), where the original network is at the base level.
As an example, the weight ω_{X,Y} of a connection from state X to state Y can be represented (at a next self-model level) by a (connectivity) self-model state named W_{X,Y}, which can be used to model synaptic plasticity (Refs. 19-22). Similarly, all other network characteristics from ω_{X,Y}, c_Y(...) and η_Y can be made adaptive by including self-model states for them. For example, for adaptive excitability (Refs. 15-17) the excitability threshold τ_Y (of a logistic combination function) for a state Y can be represented by an (aggregation) self-model state named T_Y, and an adaptive speed factor η_Y can be represented by a (timing) self-model state named H_Y. Dynamics for the activation values of these self-model states are modeled by adding their own network characteristics (thus integrating them in the network structure) and applying Eq. (1) to them.
If for all network characteristics ω, π, η for all base level states, respective self-model states W, P, H are introduced representing these network characteristics, then the canonical difference and differential equations for the base level states of the self-modeling network model are

Y(t + Δt) = Y(t) + H_Y(t) [c_{P_Y(t),Y}(W_{X_1,Y}(t) X_1(t), ..., W_{X_k,Y}(t) X_k(t)) − Y(t)] Δt,
dY(t)/dt = H_Y(t) [c_{P_Y(t),Y}(W_{X_1,Y}(t) X_1(t), ..., W_{X_k,Y}(t) X_k(t)) − Y(t)],        (2)

where P_Y(t) = (P_{1,Y}(t), ..., P_{m,Y}(t)).
This canonical difference equation is incorporated in the dedicated software environment. By instantiating this general difference equation (2) with proper values for the network characteristics for all base states Y, and similarly instantiating Eq. (1) for all self-model states, the software environment runs a system of n difference equations, where n is the number of (base and self-model) states in the network.
When Eqs. (2) are compared to Eqs. (1), it can be noticed that at each point in time t, for the value of each network characteristic the activation value of its corresponding self-model state is used: for η_Y the value H_Y(t) is used, for ω_{X_i,Y} the value W_{X_i,Y}(t), etc. In this way, each of these self-model states is assigned the functional role of the specific network characteristic it represents. In particular, when the activation values of these W-states, P-states, and H-states change, the corresponding network characteristics change accordingly. This makes these network characteristics adaptive.
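To make this substitution principle concrete, the following sketch shows one update step according to Eq. (2), where the current values of the H-, W- and P-states are used as the speed factor, connection weights and combination function parameters of the base state. This is illustrative Python rather than the MATLAB implementation, and all names and example values are assumptions.

```python
def update_base_state(Y, X, W, H, P, combination, dt=0.5):
    """One Euler step of Eq. (2) for a base state Y of a self-modeling network.

    X : activation values X_1(t), ..., X_k(t) of the incoming states
    W : self-model values W_{X_i,Y}(t), used as connection weights
    H : self-model value H_Y(t), used as speed factor
    P : self-model values P_{1,Y}(t), ..., P_{m,Y}(t), used as parameters
    """
    impacts = [w * x for w, x in zip(W, X)]    # W_{X_i,Y}(t) * X_i(t)
    aggregated = combination(P, impacts)       # c_{P_Y(t),Y}(...)
    return Y + H * (aggregated - Y) * dt       # Eq. (2)

# Example: a scaled sum whose scaling factor is taken from a P-state value.
Y_next = update_base_state(Y=0.2, X=[0.9, 0.7], W=[0.8, 0.5], H=0.4, P=[2.0],
                           combination=lambda P, V: sum(V) / P[0])
```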
Table 1. The combination functions used in the introduced network model.

Advanced logistic sum: alogistic_{σ,τ}(V_1, ..., V_k) = [1/(1 + e^{−σ(V_1 + ... + V_k − τ)}) − 1/(1 + e^{στ})] (1 + e^{−στ}); parameters: π_1 steepness σ, π_2 excitability threshold τ; used for X4–X5, X10–X16, X24–X26, X31–X38, X45–X47, X54–X59, X63–X71, X75–X93.

Complemental difference: compdiff(V_1, V_2) = 0 if V_1 = V_2 = 0, else 1 − |V_1 − V_2| / max(V_1, V_2); no parameters; used for X18–X23, X39–X44 (synchrony detectors).

Random stepmod: randstepmod_{ρ,δ}(V) = 0 if 0 ≤ time t mod ρ ≤ δ, else aV + (1 − a) rand(1, 1); parameters: π_1 repetition ρ, π_2 step time δ; used for X3 (common stimulus), X60–X62, X72–X74 (communication enablers).

Random stepmodopp: randstepmodopp_{ρ,δ}(V) = 0 if δ ≤ time t mod ρ ≤ ρ, else aV + (1 − a) rand(1, 1); parameters: π_1 repetition ρ, π_2 step time δ; used for X1–X2 (individual stimuli).

Euclidean: eucl_{n,λ}(V_1, ..., V_k) = ((V_1^n + ... + V_k^n)/λ)^{1/n}; parameters: π_1 order n, π_2 scaling factor λ; used for X6–X9, X27–X30 (sensing), X48–X53 (communication).
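For concreteness, the following sketch gives direct Python transcriptions of three of the combination functions listed in Table 1. These are straightforward readings of the specifications in the table (with persistence factor a as described above), not the library code of the software environment.

```python
import math
import random

def alogistic(sigma, tau, *V):
    """Advanced logistic sum with steepness sigma and excitability threshold tau."""
    s = sum(V)
    return ((1 / (1 + math.exp(-sigma * (s - tau)))
             - 1 / (1 + math.exp(sigma * tau))) * (1 + math.exp(-sigma * tau)))

def compdiff(V1, V2):
    """Complemental difference: 1 for fully synchronized inputs, lower otherwise."""
    if V1 == 0 and V2 == 0:
        return 0.0
    return 1 - abs(V1 - V2) / max(V1, V2)

def randstepmod(rho, delta, a, V, t):
    """Random step function: 0 during the first delta time units of each rho-cycle,
    otherwise a stochastic value with persistence factor a."""
    if t % rho <= delta:
        return 0.0
    return a * V + (1 - a) * random.random()
```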
More mathematical background on this self-modeling network architecture construction and Eqs. (2) has been described elsewhere (Ref. 39).
Usually, self-model states are not introduced for all network characteristics but only for part of them. For example, in case only self-model states P for the combination function parameters π are introduced, the canonical difference and differential equations are

Y(t + Δt) = Y(t) + η_Y [c_{P_Y(t),Y}(ω_{X_1,Y} X_1(t), ..., ω_{X_k,Y} X_k(t)) − Y(t)] Δt,
dY(t)/dt = η_Y [c_{P_Y(t),Y}(ω_{X_1,Y} X_1(t), ..., ω_{X_k,Y} X_k(t)) − Y(t)],        (3)

where P_Y(t) = (P_{1,Y}(t), ..., P_{m,Y}(t)). This specific case will come back in the mathematical analysis addressed in Sec. 6.2 (to establish Theorem 2 there).
Note that difference and differential equations (2) are not exactly in the standard format of a temporal-causal network, as H_Y is not a constant speed factor and also the P- and W-values are not constant. However, they can be rewritten into the temporal-causal network format when the following combination function c*_Y(..) is defined:

c*_Y(H, P_1, ..., P_m, W_1, ..., W_k, V_1, ..., V_k, V) = H c_{P,Y}(W_1 V_1, ..., W_k V_k) + (1 − H) V,        (4)

where P = (P_1, ..., P_m).
Based on this combination function, consider the following difference equation:

Y(t + Δt) = Y(t) + [c*_Y(H_Y(t), P_{1,Y}(t), ..., P_{m,Y}(t), W_{X_1,Y}(t), ..., W_{X_k,Y}(t), X_1(t), ..., X_k(t), Y(t)) − Y(t)] Δt.        (5)

This is indeed in temporal-causal network format (1) (with speed factor 1). Now note that using (4), Eq. (5) can be rewritten as follows:

Y(t + Δt) = Y(t) + [H_Y(t) c_{P_Y(t),Y}(W_{X_1,Y}(t) X_1(t), ..., W_{X_k,Y}(t) X_k(t)) + (1 − H_Y(t)) Y(t) − Y(t)] Δt
          = Y(t) + [H_Y(t) c_{P_Y(t),Y}(W_{X_1,Y}(t) X_1(t), ..., W_{X_k,Y}(t) X_k(t)) − H_Y(t) Y(t)] Δt
          = Y(t) + H_Y(t) [c_{P_Y(t),Y}(W_{X_1,Y}(t) X_1(t), ..., W_{X_k,Y}(t) X_k(t)) − Y(t)] Δt,        (6)

where P_Y(t) = (P_{1,Y}(t), ..., P_{m,Y}(t)).
Equation (6) is exactly difference equation (2) above; this confirms that the combination function c*_Y(..) chosen in (4) indeed shows that the self-modeling network has a temporal-causal network format (1).
As the outcome of a process of network reification is itself also a temporal-causal network model, as has been shown above, this self-modeling network construction can easily be applied iteratively to obtain multiple orders of self-models at multiple (first-order, second-order, etc.) self-model levels. For example, a second-order self-model may include a second-order (timing) self-model state H_{W_{X,Y}} representing the speed factor η_{W_{X,Y}} for the dynamics of first-order self-model state W_{X,Y}, which in turn represents the adaptation of connection weight ω_{X,Y}. Similarly, a second-order self-model may include a second-order (timing) self-model state H_{T_Y} representing the speed factor η_{T_Y} for the dynamics of first-order self-model state T_Y, which in turn represents the adaptation of excitability threshold τ_Y for Y.
In this paper, this multi-level self-modeling network modeling perspective will be applied to obtain a second-order adaptive network architecture addressing controlled behavioral adaptation induced by detected synchrony. In this self-modeling network architecture, the first-order self-model models the adaptation of the base level network that models behavior, and the second-order self-model level models the control over this adaptation. As an example, the control level can be used to make the adaptation speed context-sensitive, as addressed in the metaplasticity literature (Refs. 23 and 24). For instance, the metaplasticity principle "adaptation accelerates with increasing stimulus exposure" (Ref. 24) formulated by Robinson et al. can easily be modeled by using second-order self-model states; this has actually been done for the introduced model, as will be discussed in Sec. 4.
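As a minimal sketch of how second-order self-model states realize this principle, the fragment below lets the adaptation speed used for a first-order W-state be the current value of a second-order H_W-state that follows the stimulus representation. The wiring direction follows the description above; the concrete update rules, names and constants are simplifying assumptions for illustration, not the full specification of the introduced model.

```python
def w_update(W, drive, H_W, dt=0.5):
    """One Euler step for a first-order W-state; its adaptation speed is the
    current value of the second-order self-model state H_W (cf. Eq. (2))."""
    return W + H_W * (drive - W) * dt

def h_w_update(H_W, stim_rep, eta_H=0.1, dt=0.5):
    """The H_W-state follows the stimulus representation state, so stronger
    stimulus exposure yields faster adaptation of W ('adaptation accelerates
    with increasing stimulus exposure')."""
    return H_W + eta_H * (stim_rep - H_W) * dt
```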
4. The Adaptive Neural Agent Model
In this section, our adaptive neural agent model is explained in some detail. The controlled adaptive agent design uses a self-modeling network architecture of three levels as discussed in Sec. 3: a base level, a first-order self-model level, and a second-order self-model level. Here, the (middle) first-order self-model level models how connections and excitability thresholds of the base level are adapted over time,
and the (upper) second-order self-model level models the context-sensitive control over the adaptations. Appendix B provides explanations for all of its states and a full specification of the model.
4.1. Base level
Figure 2 shows a graphic overview of the base level. For each agent, interaction states were modeled: states involved in sensing (indicated by sense) are on the left-hand side of each box, and states involved in execution or expression of actions (move, exp affect, talk) are on the right-hand side. In between these interaction states, within a box, are the agent's internal mental states; outside the boxes are the world states. Note that we assume that each agent also senses its own actions, modeled by the arrows from right to left outside the box.
Fig. 2. (Color online) Base level of the introduced adaptive agent model (upper picture) with three modalities and (in dark
pink) six synchrony detection states for intrapersonal and interpersonal synchrony and how the agents interact (lower picture)
according to the three modalities.
For each agent, we modeled a few internal mental states such as sensory representation states (rep) and preparation states (prep) for each of the three modalities: movement m, expression of affect b, and verbal action v.
Furthermore, each agent has a conscious emotion state for affective response b (cons emotion). Each of the mentioned states is depicted in Fig. 2 by a light pink circle shape. For each modality, its representation state has an outgoing (response) connection to the corresponding preparation state and it has an incoming (prediction) connection back from the preparation state to model internal mental simulation (Refs. 41 and 42).
Finally, there are the six synchrony detector states (depicted in Fig. 2 by the darker pink diamond shapes) which are introduced here. As in previous research (Ref. 37), we cover three intrapersonal synchrony detection states for the three pairs of the three modalities:

movement-emotion (m-b),
movement-verbal action (m-v),
emotion-verbal action (b-v).

These intrapersonal synchrony detection states have incoming connections from the two execution states for the modalities they address. The conscious emotion state is triggered by incoming connections from the preparation state for affective response b together with the three intrapersonal synchrony detection states (Ref. 43). In addition, the conscious emotion state has an incoming connection from the verbal action execution state (for noticing the emotion in the verbal utterance) and an outgoing connection to the preparation of the verbal action (for emotion integration in the verbal action preparation).
There are three interpersonal synchrony detection states for the three modalities m, b, and v. Each of them has two incoming connections: from the sensing state (representing the action of the other agent) and the execution state (representing the agent's own action) of the modality addressed.
For a few states and connections, the excitability and connection weights are adaptive depending on detected synchrony: detected synchrony leads to becoming more sensitive to sensing an agent and expressing to that agent (short-term effect) and to connecting more strongly to the agent (long-term effect). Here, two different time scales for the adaptations are considered:

• In the short term, enhancing the excitability of such internal states, so that they become more responsive or sensitive (a form of instantaneous homeostatic regulation).
• In the long term, making the weights of such connections stronger so that propagation between states is strengthened (a form of more endurable bonding).

This applies to two types of states and four types of connections in particular, all playing an important role in the interaction behavior of the two agents:

• Short-term adaptive excitability for internal states
  – The representation states for each of the three modalities.
  – The execution states for each of the three modalities.
• Long-term adaptive internal and external connections
  – The (representing) connections from sensing states to representation states for each of the three modalities.
  – The (executing) connections from preparation states to execution states for each of the three modalities.
  – The (observing) connections from world states to sensing states.
  – The (effectuating) connections from execution states to world states.

Thus, more detected synchrony will lead to enhanced excitability for these types of states (short-term adaptation) and to these connections becoming stronger (long-term adaptation); each of these adaptations contributes in its own way (and on its own time scale) to the interaction behavior of the agents. In the short term, more sensitive representation states will lead to gaining better images of the modalities of the other agent; this will make the sensed signals better available and accessible for the agent. More sensitive execution states will lead to better expressed own modalities, so that the other agent can sense them better.
Over time and repeated interactions, a stronger
(external) observing connection will lead to sensing
the other agent better (e.g. turning sensors in the right direction and bending or getting closer to the other), and a stronger representing connection will again (but now in a more endurable manner) make the sensed signals better available and accessible for the agent. Conversely, a stronger executing connection will also contribute (in an endurable manner) to stronger expression and acting toward the other, and a stronger effectuating connection to better availability (for the other) of the action effects in the world (e.g. more visible, better hearable by directing and positioning in the right direction, and bending or getting closer to the other). In Sec. 4.2, we discuss in more detail how we modeled these forms of adaptivity and their control using the principle of self-modeling of the network model.
Finally, at the base level some world states are modeled for stimuli s that are sensed by the agents. In the simulations, they have stochastic activation levels. In some episodes one common stimulus is observed by both agents (for example, when they physically meet and therefore are in the same environment), but in other episodes the agents receive different stimuli. Furthermore, the world situation's suitability for enabling communication between the two agents is also modeled by similar stochastic fluctuations. Moreover, two context states are included to model the conditions to maintain the excitability thresholds well.
4.2. Modeling adaptation and its control
We modeled the adaptation and its control needed in the neural agent model using a self-modeling network (Refs. 39 and 40); see also Sec. 3. Following what has been described in Sec. 4.1, for a number of states Y adaptive excitability has been modeled via the excitability threshold τ_Y of the logistic function used for these states (see Table 1). Moreover, the strengthening of connections from X to Y has been modeled via adaptive connection weights ω_{X,Y}. Following Sec. 3, these adaptations have been modeled in particular through self-modeling for these τ_Y and ω_{X,Y} by adding the following first- and second-order self-model states:

• First-order self-model T-states T_Y are used for short-term adaptation of the adaptive base excitability thresholds τ_Y for the internal representation states and execution states Y for the three considered modalities (movement, affective response, and verbal action). For each agent there are six of these T-states: for the three representation base states and the three execution base states (both for the three modalities).
• First-order self-model W-states W_{X,Y} are used for adaptation of the adaptive base connection weights ω_{X,Y} for both internal and external connections for the three considered modalities: internal connections at the base level from sense states to representation states and from preparation states to execution states, and external connections from execution states to world states and from world states to sense states. For each agent there are 12 of these W-states, for the connections from world states to sensor states, from sensor states to representation states, from representation states to execution states, and from execution states to world states (all for the three modalities).
• Second-order self-model H_T-states are used for control of the T-states for adaptation of the adaptive excitability thresholds τ_Y for the internal representation states and execution states Y. For each agent there is one of these states.
• Second-order self-model H_W-states are used for control of the W-states for the adaptation of the adaptive base connection weights ω_{X,Y}. For each agent there is one of these states.
Figure 3 shows the overall design of the network model; here, the first-order self-model states are in the middle (blue) plane and the second-order self-model states in the upper (purple) plane. The first-order states include T-states representing the excitability thresholds of representation and execution states and W-states representing the weights of the different types of adaptive connections addressed.
Recall from Sec. 3 the canonical difference and differential equation (2) for a self-modeling network. This equation shows that at each time point, for the values of the network characteristics, the values of these self-model T-states and W-states are used. Based on this equation, by changing the activation values of these T-states and W-states over time t, the corresponding excitability thresholds and connection weights change accordingly, which makes them adaptive. Such change of the values of the T-states and W-states occurs due to the influences
from the detected synchronies, modeled by the upward (blue) arrows in Fig. 3 from the synchrony detection states in the base plane to the T-states and W-states in the middle plane.
For most states the combination function alogistic is used, which has an excitability threshold parameter that can be made adaptive; see the last column in Table 1. The synchrony detection states, however, use a different function, called compdiff, to measure the extent of synchrony. For further details, see Table 1 and Appendix B.
There are four second-order self-model states to control the adaptation: two second-order self-model states H_{T_A} and H_{T_B} for excitability adaptation control, one for each agent, and two second-order self-model states H_{W_A} and H_{W_B} for connection weight adaptation control, also one for each agent. These second-order self-model states are used to represent the adaptation speed (learning rate) for the adaptive excitability threshold T-states and connection weight W-states for the concerning agents A and B. Based on the canonical difference and differential equation (2) for a self-modeling network from Sec. 3, for each time point t the activation values of the second-order self-model states H_{T_A}, H_{T_B}, H_{W_A}, and H_{W_B} at t are used as the values for these network characteristics of the first-order self-model T-states and W-states for A and B. These second-order self-model states H_{T_A}, H_{T_B}, H_{W_A}, and H_{W_B} model the second-order adaptation (or metaplasticity) principle "adaptation accelerates with increasing stimulus exposure" (Ref. 24). To this end they have incoming connections (blue arrows from base plane to upper plane) from the stimulus representation states at the base level.
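The following fragment sketches this pathway for one modality of one agent in illustrative Python: a detected interpersonal synchrony value lowers the excitability threshold T-state (short-term adaptation) and raises the connection weight W-state (long-term adaptation), with the H_T- and H_W-states as the respective adaptation speeds. The directions of change follow the description above and Secs. 5.4 and 5.5, but the concrete driving values are simplifying assumptions; the actual model uses the alogistic function with the connection weights specified in Appendix B.

```python
def compdiff(V1, V2):
    # Interpersonal synchrony detector for one modality (Table 1): compares the
    # agent's own execution level with the sensed action level of the other agent.
    if V1 == 0 and V2 == 0:
        return 0.0
    return 1 - abs(V1 - V2) / max(V1, V2)

def adapt_step(T, W, own_exec, sensed_other, H_T, H_W, dt=0.5):
    sync = compdiff(own_exec, sensed_other)
    # Short-term adaptation: more detected synchrony pushes the excitability
    # threshold T-state down, making the related base states more responsive.
    T_new = T + H_T * ((1 - sync) - T) * dt
    # Long-term adaptation: more detected synchrony pushes the connection weight
    # W-state up (bonding), at a much lower speed H_W (e.g. 0.01).
    W_new = W + H_W * (sync - W) * dt
    return T_new, W_new
```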
5. Simulation Results
Appendix B provides a full specification of the model as used in our simulations. As can be seen, in general the values have been chosen in a standard manner. For example, all positive connection weights are 1, except for those of the long-term adaptation speed self-model states H_{W_A} and H_{W_B}, which are 0.01; see Table B.4.
Fig. 3. Overview of the overall second-order adaptive network model.
The negative connection weights for the T-states are −0.12, so that together they add up to −0.72, which compensates the positive weight 1 enough to have an effect. The values for the steepness parameter σ (see Table B.6) for the combination function alogistic were all set to 5, which is also a kind of default value. The values of the threshold parameter τ have often been set at 0.5, but accordingly higher for states with multiple incoming connections. For all states where the Euclidean combination function eucl is used, the order parameter n is 1, which makes it linear, and the scaling factor is the sum of the weights of the incoming connections, which normalizes it. Note that the time unit is kept abstract. Depending on the application context one might think of minutes, for example for therapeutic sessions.
5.1. Design of the simulation experiments
In this section, we evaluate our neural agent model in an experimental simulation paradigm. Our paradigm was set up in such a way that we could evaluate the behavior of our two agents during four different types of consecutive episodes (see Table 2 and Fig. 4), which are explained as follows. Each of these types of episodes lasts for 30 time units, so that a cycle of four episodes equals 120 time units. Our total simulation run had a duration of 840 time units and the step size (Δt) was 0.5, resulting in 1680 computational steps in total for each simulation run. This means that each cycle of four episodes was repeated seven times in each simulation. As it concerns a partly stochastic simulation, we ran 20 repetitions of each simulation with the same episodic paradigm and parameter settings, to get a sense of the robustness of the neural agent model's behavior. It turned out that the general patterns were approximately similar across all independent simulations. Therefore, we selected one simulation to discuss in the upcoming subsections.
Regarding the four different types of episodes in this simulation, they manipulate both whether the two agents received the same or a different stochastic stimulus and whether they were able to communicate with each other (with some stochastic variations in enabling conditions, due to environmental changes and noise); see Table 2. The specific episodes for the considered example simulation are shown in Fig. 4.
The world states ws_{s,A} and ws_{s,B} indicate the different stimuli for agents A and B from the world (activated from time 0 to 60 and then repeated every 120 time units; see the dark solid and dashed blue lines for A and B, respectively). Similarly, world state ws_s indicates the common stimulus (activated from time 60 to 120 and then repeated every 120 time units; see the purple line). These three states have values stochastically fluctuating approximately between 0.7 and 0.9. Furthermore, the self-model states W_{exec,ws_{x,A,B}} (from A to B) and W_{exec,ws_{x,B,A}} (from B to A) indicate the communication-enabling conditions in the environment. They are activated from time 30 to time 60, thereby fluctuating stochastically roughly between 0.45 and 0.65, and then repeated every 60 time units.
Table 2. Simulation paradigm of each run with the neural agent model: the pattern of stimuli and communication enablers repeats every 120 time units.

Time     | Episode   | Different stimuli ws_{s,A}, ws_{s,B} | Common stimulus ws_s | Communication enabled W_{exec,ws_{x,A,B}}, W_{exec,ws_{x,B,A}}
0–30     | Episode 1 | Yes  | No   | No
30–60    | Episode 2 | Yes  | No   | Yes
60–90    | Episode 3 | No   | Yes  | No
90–120   | Episode 4 | No   | Yes  | Yes
120–150  | Episode 5 | Yes  | No   | No
150–180  | Episode 6 | Yes  | No   | Yes
180–210  | Episode 7 | No   | Yes  | No
210–240  | Episode 8 | No   | Yes  | Yes
240–270  | Episode 9 | etc. | etc. | etc.
etc.     | etc.      | etc. | etc. | etc.
All these stochastic activation patterns indeed follow the pattern shown in Table 2, with repetition every 120 time units until the end time 840.
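Read together, Table 2 and Fig. 4 describe a schedule that can be generated with step functions of the kind listed in Table 1. The sketch below reproduces this episode pattern over one run of 840 time units in illustrative Python; the value ranges 0.7-0.9 and 0.45-0.65 follow the description above, while the use of plain uniform draws (instead of randstepmod with persistence factor a) and all function names are simplifying assumptions.

```python
import random

CYCLE = 120              # one cycle of four 30-unit episodes
T_END, DT = 840, 0.5     # 1680 computational steps per run

def individual_stimuli(t):
    # Different stimuli for A and B in the first half of each cycle (episodes 1 and 2).
    return 0.0 if t % CYCLE >= 60 else random.uniform(0.7, 0.9)

def common_stimulus(t):
    # Common stimulus in the second half of each cycle (episodes 3 and 4).
    return 0.0 if t % CYCLE < 60 else random.uniform(0.7, 0.9)

def communication_enabled(t):
    # Communication enabled in the second 30-unit episode of each 60-unit half-cycle
    # (episodes 2 and 4).
    return 0.0 if t % 60 < 30 else random.uniform(0.45, 0.65)

schedule = [(t, individual_stimuli(t), common_stimulus(t), communication_enabled(t))
            for t in (i * DT for i in range(int(T_END / DT)))]
```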
5.2. Behavior of the base states of the
neural agent model
For the base states, in the first phase from time 0 to 10 the representations (states rep_{s,A} and rep_{s,B}) of the stimulus are activated (the curves fluctuating around 0.8) and preparations (states prep_{x,A}) for actions are triggered (curves going to 1); see the upper graph in Fig. 5. This leads, together with the intrapersonal synchrony detection activation (see Figs. 5 and 6), to the conscious emotion around time 10 (red curve going to 1), but this is still only internal processing as no executions of actions take place yet. The action executions (states move_{m,A}, exp affect_{b,A}, and talk_{A,B,v}) for both agents start to come up after time 10 (e.g. the purple line); this also
Fig. 4. The stimuli and interaction enabling states in the neural agent model.
Notes: From 0 to 120 time units (upper graph) and from 0 to 840 time units (lower graph): interaction enabling (multi-color) for 30–60, 90–120, etc.; different stimuli (blue) for 0–60, 120–180, etc.; common stimulus (purple) for 60–120, 180–240, etc. (see also Table 2).
depends on the short-term adaptations that will be discussed in Sec. 5.4. The curves immediately under these executions concern the sensing of the other agent's actions (the sense_{A,x,B} and sense_{B,x,A} states); in some periods they fluctuate slightly due to environmental noise on the communication channels. The actual communication level (the ws_{x,A,B} and ws_{x,B,A} states) is seen below it, from 30 to 60 and from 90 to 120.
For the longer term, the lower graph in Fig. 5 shows that each interval with enabling conditions for communication leads to higher activations of the action executions (the purple line) until values around 0.8 are reached. This is due to a long-term behavioral adaptation that is discussed in Sec. 5.5. Accordingly, the sensing states become higher as well over this longer term, but not as high as the action executions, due to a communication bias incorporated in the model. This overall pattern shows that the enabling conditions for communication have a stronger adaptive effect on the actions than having a common stimulus.
5.3. Behavior of the intrapersonal
synchrony and interpersonal
synchrony detector states
The curves that the graphs in Figs. 6 and 7 have in common depict the detected intrapersonal synchrony and interpersonal synchrony. Here:

• The detected intrapersonal synchrony is represented by the states intrasyncdet_{A,xy} and
Fig. 5. The base states in the neural agent model from 0 to 120 time units (upper graph) and from 0 to 840 time units (lower graph). Due to the behavioral adaptivity, the activation levels in response to the stimuli and interaction become stronger over time, both in the short term (within each interaction enabling interval 30–60, 90–120, etc.) and in the long term.
intrasyncdet_{B,xy}, shown as the light green and light blue curves going to 1 from time 0 to 15.
• The interpersonal synchrony detection is represented by the states intersyncdet_{A,B,x} and intersyncdet_{B,A,x}, shown as the red and blue curves going to 0.4 from time 0 to 30 and further to 0.8 from time 30 to 60.
Here it can be observed that the detection of intrapersonal synchrony already takes place in the first episode from time 0 to 30, meaning that no common stimulus or communication is required. In contrast, the detection of interpersonal synchrony strongly depends on the interaction between the two agents. Note also that the former type of detected synchrony reaches a perfect level of 1, due to the coherent internal makeup of the agents, while the latter type does not get higher than around 0.8. At first sight this may look strange, given that the actual executions of actions of both agents are practically the same, as discussed above (see Sec. 5.2
Fig. 6. The detected intrapersonal synchrony, interpersonal synchrony and short-term adaptation T-states in the neural agent model from 0 to 120 time units (upper graph) and from 0 to 840 time units (lower graph). As a form of short-term behavioral adaptivity, during each interaction interval (30–60, 90–120, etc.) the T-states become lower. This adaptively lowers the thresholds of the base states and therefore raises their activation values within each of these intervals, as also shown in Fig. 5.
and Fig. 5). However, this is due to the communication bias that was also noted above in Sec. 5.2. This demonstrates the capability of the model to distinguish a subjective, personally detected interpersonal synchrony from an objective form of interpersonal synchrony detection as might be assigned by an external observer but not by the agent itself.
Fig. 7. The detected intrapersonal synchrony, interpersonal synchrony, and long-term adaptation W-states and the H_W-states of their adaptation speed in the neural agent model from 0 to 120 time units (upper graph) and from 0 to 840 time units (middle graph). The lower graph depicts the H_W-states with a different vertical scale (times 10^{−3}).
5.4. The interplay between synchrony
and short-term adaptation
In Fig. 6, the synchrony detection states are shown together with the states involved in the short-term adaptation: the first-order self-model T-states that represent the adaptive excitability thresholds for representation and execution states and the second-order self-model H_T-states that represent the T-states' speed factors (adaptive learning rates). Apart from the synchrony detection states already discussed in Sec. 5.3, the graphs show two light green curves fluctuating around 0.6 for the H_T-states and a blue curve going down to below 0.3 for the T-states.
According to the metaplasticity principle "adaptation accelerates with increasing stimulus exposure" (Ref. 24), the H_T-states indeed fluctuate with the stimuli. Furthermore, in accordance with this, when one stimulus period is in transition to another stimulus period, a short dip can be seen in the values of the H_T-states: as stimuli start from 0, there is a very short period at a lower level, as can also be seen in Fig. 4. Moreover, it is clear that the T-states (e.g. the blue curve) show a pattern opposite to that of the interpersonal synchrony detection states. In particular, in the episodes from 30 to 60 and from 90 to 120 (and so on), where the detected interpersonal synchrony is the highest, the T-states for the excitability thresholds are the lowest. This is a short-term adaptation that gives these agent states, related to the communication with the other agent, a higher excitability due to the detected interpersonal synchrony, which will have an intensifying effect on their communication. Not coincidentally, the mentioned periods are also the periods with good enabling conditions for communication (see also Sec. 5.2). It can also be noted that this tendency is a short-term effect and is reversible: the T-states get higher again when the detected interpersonal synchrony gets lower.
5.5. The interplay between synchrony
and long-term adaptation
In Fig. 7 the synchrony detection states are shown together with the states involved in the long-term adaptation: the first-order self-model W-states that represent the adaptive weights for the connections to the representation and execution states and the second-order self-model H_W-states that represent the W-states' speed factors (adaptive learning rates). Here, apart from the synchrony detection states already discussed in Sec. 5.3, the graphs show the W-states (e.g. a blue curve) slowly and gradually going up to above 0.5 at time 120 and further up to about 0.8 at time 840.
Moreover, at a very low level, the curves for the H_W-states can be seen. They also fluctuate according to the metaplasticity principle "adaptation accelerates with increasing stimulus exposure" (Ref. 24), but at a very low level around 0.005 (see the lower graph in Fig. 7). Again, following the same principle, when one stimulus period is in transition to another stimulus period, a short dip can be seen in the values of the H_W-states. This happens because stimuli start from 0, so there is a very short period at a lower level (see Fig. 4). The pattern of the W-states indeed shows a long-term adaptation effect. It highlights that they get a repeated boost in the time intervals 30–60, 90–120, and so on, and show a form of persistency. These boosts occur specifically in these intervals for a reason: these intervals are when there are communication-enabling conditions and, as discussed in Sec. 5.4, that induces synchrony and the short-term adaptation via the T-states, which in turn add to synchrony. Therefore, these two effects are at the basis of these boosts for the long-term adaptation. In this way, there is a form of interaction between short-term and long-term adaptation.
6. Modeling and Analysis of Adaptive Dynamical Systems via their Canonical Self-Modeling Network Representation

This section discusses how any smooth adaptive dynamical system can be modeled by a self-modeling network model. It is shown in particular that any adaptive dynamical system has a canonical representation as a self-modeling network defined by network characteristics for connectivity, aggregation, and timing. The network concepts of this canonical representation of an adaptive dynamical system provide useful tools for formal analysis of the dynamics of the adaptive dynamical system addressed.
With this idea in mind, equilibrium analysis of self-modeling network models is addressed. Dynamics in network models are described by node states that change over time (for example, for individuals' opinions, intentions, emotions, or beliefs). Such dynamics depend on network characteristics for the connectivity between nodes, the aggregation of impacts from different nodes on a given node, and the timing of the node activation updates.39,40

For example, whether within a well-connected group in the end a common opinion, intention, emotion or belief is reached (a common value for all node states) depends on all these network characteristics. Sometimes silent assumptions are made about the aggregation and timing characteristics. For timing, it is often silently assumed that the nodes are updated in a synchronous manner, although in application domains this assumption is usually not fulfilled. For aggregation, in social network models usually linear functions are applied, which means that it is often not investigated how a variation of this choice of aggregation would affect the dynamics.

In the modeling and analysis approach used in this paper, a more diverse landscape is covered, not limited by the fixed conditions on connectivity, aggregation or timing that are so often imposed. For connectivity, both acyclic and cyclic networks are covered here. For aggregation, both networks with linear and nonlinear aggregation are considered; for nonlinear aggregation, networks with logistic aggregation are addressed but also networks with other forms of nonlinear aggregation. Finally, both synchronous and asynchronous timing are covered. The often-occurring use of linear functions for aggregation in social network models may be based on a more general belief that dynamical system models can be analyzed better for linear functions than for nonlinear functions. Although there may be some truth in this if specifically logistic nonlinear functions are compared to linear functions, such a belief is not correct in general. It has been found that classes of nonlinear functions exist that enable good analysis possibilities for the emerging dynamics within a network model, thereby, among others, not using any conditions on the connectivity but instead exploiting for any network its structure of strongly connected components.
In Sec. 6.1 it is shown that for the nonadaptive case this network-oriented modeling approach is equivalent to any dynamical systems modeling approach (Theorem 1 and Corollary 1), and in Sec. 6.2 that for the adaptive case self-modeling networks are equivalent to any adaptive dynamical systems approach (Theorem 2 and Corollary 2). In Sec. 7, equilibrium analysis of network models is provided and applied to the model introduced earlier.
6.1. Dynamical systems and their canonical network representation

Dynamical systems are usually specified in certain mathematical formats; see pp. 241-252 of Ref. 44 for some details. In the first place, a finite set of states (or state variables) X_1, ..., X_n is assumed, describing how the system changes over time via functions X_1(t), ..., X_n(t) of time t. As discussed by Ashby44 and Port and Van Gelder,45 a dynamical system is a state-determined system, which can be formalized in a numerical manner by a relation (rule of evolution) that expresses how for each time point t the future value of each state X_j at time t+s uniquely depends on s and on X_1(t), ..., X_n(t). Therefore, a dynamical system can be described via n functions F_j(V_1, ..., V_n, s), one for each X_j, in the following manner (see also pp. 243-244 of Ref. 44):

X_j(t+s) = F_j(X_1(t), ..., X_n(t), s)  for s > 0.   (7)

If these functions F_j and the X_j are continuously differentiable (which also implies they are continuous), we call the dynamical system smooth. Suppose such a smooth dynamical system is given. It turns out that it can always be described in a canonical manner by a temporal-causal network model; the argument is as follows. Consider (7), where the functions F_j and X_j are continuously differentiable. In the particular case of s approaching 0 it holds

lim_{s↓0} X_j(t+s) = lim_{s↓0} F_j(X_1(t), ..., X_n(t), s),

which due to continuity of the involved functions implies

X_j(t) = F_j(X_1(t), ..., X_n(t), 0).   (8)

So, Eq. (7) also holds for s = 0.
Let X'_j denote the derivative of X_j with respect to time. Applying the partial derivative ∂(..)/∂s to both sides of Eq. (7),

X_j(t+s) = F_j(X_1(t), ..., X_n(t), s),

it follows that

∂X_j(t+s)/∂s = ∂F_j(X_1(t), ..., X_n(t), s)/∂s.

Here, for the left-hand side, by the chain rule for function composition it holds

∂X_j(t+s)/∂s = X'_j(t+s) ∂(t+s)/∂s = X'_j(t+s).

So, it is found that for all t and s it holds

X'_j(t+s) = ∂F_j(X_1(t), ..., X_n(t), s)/∂s.   (9)

In particular, this holds for s = 0; therefore

X'_j(t) = [∂F_j(X_1(t), ..., X_n(t), s)/∂s]_{s=0}.   (10)

For a more detailed explanation of the argument for (10), see Appendix A, where also the differences and relations with Ashby's approach44 are discussed. Now define the (combination) function g_j(V_1, ..., V_n) by

g_j(V_1, ..., V_n) = V_j + [∂F_j(V_1, ..., V_n, s)/∂s]_{s=0}.   (11)

Then it holds

dX_j(t)/dt = [∂F_j(X_1(t), ..., X_n(t), s)/∂s]_{s=0} = g_j(X_1(t), ..., X_n(t)) - X_j(t).   (12)

Comparing Eq. (12) to the canonical format (1) in Sec. 3 that defines the dynamics of temporal-causal networks, it immediately follows that the two match as soon as the speed factors η and connection weights ω are set to 1, i.e. from (12) it follows:

dX_j(t)/dt = η_{X_j} [c_{X_j}(ω_{X_1,X_j} X_1(t), ..., ω_{X_n,X_j} X_n(t)) - X_j(t)]   (13)

with η_{X_j} = 1 and c_{X_j} = g_j for all j, and ω_{X_i,X_j} = 1 for all i and j. For an example of this, see Box 1.
Box 1. Example of the canonical transformation of any smooth dynamical system into temporal-causal network format.

Consider the following example dynamical system from p. 244 of Ref. 44:

X_1(t+s) = X_1(t) + X_2(t) s + s^2,
X_2(t+s) = X_2(t) + 2s.

This can be formalized in the format of (7) by

X_1(t+s) = F_1(X_1(t), X_2(t), s),
X_2(t+s) = F_2(X_1(t), X_2(t), s),

where the functions F_1 and F_2 are defined by

F_1(V_1, V_2, s) = V_1 + V_2 s + s^2,
F_2(V_1, V_2, s) = V_2 + 2s.

Then

[∂F_1(V_1, V_2, s)/∂s]_{s=0} = [V_2 + 2s]_{s=0} = V_2,
[∂F_2(V_1, V_2, s)/∂s]_{s=0} = [2]_{s=0} = 2.

This leads to the differential equations

dX_1(t)/dt = X_2(t),
dX_2(t)/dt = 2.

When the (combination) functions g_1 and g_2 are defined by

g_1(V_1, V_2) = V_1 + [∂F_1(V_1, V_2, s)/∂s]_{s=0} = V_1 + V_2,
g_2(V_1, V_2) = V_2 + [∂F_2(V_1, V_2, s)/∂s]_{s=0} = V_2 + 2,

then the following is obtained:

dX_1(t)/dt = g_1(X_1(t), X_2(t)) - X_1(t),
dX_2(t)/dt = g_2(X_1(t), X_2(t)) - X_2(t).

This is equivalent to

dX_1(t)/dt = η_{X_1} [c_{X_1}(ω_{X_1,X_1} X_1(t), ω_{X_2,X_1} X_2(t)) - X_1(t)],
dX_2(t)/dt = η_{X_2} [c_{X_2}(ω_{X_1,X_2} X_1(t), ω_{X_2,X_2} X_2(t)) - X_2(t)]

with η_{X_j} = 1 and c_{X_j} = g_j for all j, and ω_{X_i,X_j} = 1 for all i and j.
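As a cross-check of Box 1, the canonical network representation can also be simulated numerically with simple Euler steps and compared with the closed-form maps F_1 and F_2. The sketch below is ours and is not part of the paper's dedicated MATLAB environment; the function and variable names are illustrative.

```python
# Minimal sketch (illustrative): Euler simulation of the canonical temporal-causal
# network representation of the Box 1 example, compared with the closed-form
# dynamical system X1(t+s) = X1(t) + X2(t)s + s^2, X2(t+s) = X2(t) + 2s.
# All speed factors and connection weights are 1, as in Definition 1.

def g1(v1, v2):
    """Combination function for X1: V1 + [dF1/ds]_{s=0} = V1 + V2."""
    return v1 + v2

def g2(v1, v2):
    """Combination function for X2: V2 + [dF2/ds]_{s=0} = V2 + 2."""
    return v2 + 2.0

def simulate(x1_0, x2_0, t_end, dt=0.001):
    """Euler integration of dXj/dt = gj(X1, X2) - Xj."""
    x1, x2 = x1_0, x2_0
    for _ in range(int(t_end / dt)):
        dx1 = g1(x1, x2) - x1   # equals X2(t)
        dx2 = g2(x1, x2) - x2   # equals 2
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1, x2

if __name__ == "__main__":
    t = 1.0
    x1_sim, x2_sim = simulate(0.0, 0.0, t)
    x1_exact = 0.0 + 0.0 * t + t ** 2   # F1(X1(0), X2(0), t)
    x2_exact = 0.0 + 2.0 * t            # F2(X1(0), X2(0), t)
    print(f"simulated: X1 = {x1_sim:.4f}, X2 = {x2_sim:.4f}")
    print(f"exact:     X1 = {x1_exact:.4f}, X2 = {x2_exact:.4f}")
```

For a small step size the simulated values approach the exact ones (here X1 ≈ 0.999 versus 1 and X2 = 2.0), illustrating that the canonical network representation reproduces the original system.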
This shows that any given smooth dynamical system can be formalized in this canonical manner by a representation in the temporal-causal network format; this notion is described in more detail in Definition 1 and Theorem 1. Note that this also shows theoretically that the use of specific values for speed factors and connection weights is not essential, as they can all be set to 1. However, they still are convenient instruments in the practice of modeling real-world processes.

Definition 1 [Canonical network representation of a smooth dynamical system]. Let any smooth dynamical system be given by

X_j(t+s) = F_j(X_1(t), ..., X_n(t), s)  for s ≥ 0, j = 1, ..., n,

where the functions F_j are continuously differentiable. Then the canonical temporal-causal network representation of it is defined by network characteristics ω_{X_i,X_j}, c_{X_j}, η_{X_j} for all i and j with

ω_{X_i,X_j} = 1 for all i and j,
c_{X_j}(V_1, ..., V_n) = V_j + [∂F_j(V_1, ..., V_n, s)/∂s]_{s=0},
η_{X_j} = 1 for all j.

This network representation has dynamics induced by the following canonical differential equations for temporal-causal networks:

dX_j(t)/dt = η_{X_j} [c_{X_j}(ω_{X_1,X_j} X_1(t), ..., ω_{X_n,X_j} X_n(t)) - X_j(t)].

So, by the argument above, the following theorem is obtained:

Theorem 1 (The canonical network representation of a smooth dynamical system). Any smooth dynamical system can be formalized in a canonical manner by a temporal-causal network model called its canonical network representation. Conversely, any temporal-causal network model is a dynamical system model.

As a corollary from Theorem 1, the following well-known result immediately follows.

Corollary 1 (From smooth dynamical system to first-order differential equations). Any smooth dynamical system can be formalized as a system of first-order differential equations.

The latter result was also proven in a different way in pp. 241-252 of Ref. 44. See Appendix A for some more details.
6.2. Adaptive dynamical systems and their canonical self-modeling network representation

In this section, it is shown how the approach described in Sec. 6.1 can be extended to obtain a transformation of any smooth adaptive dynamical system into a self-modeling network model. Adaptive dynamical systems are usually modeled by two levels of dynamical systems (see Fig. 8).

Here the higher level dynamical system models the dynamics of the parameters P_{i,j} of the lower level dynamical system (the lower level component in Fig. 8) that describes the dynamics of the variables X_i, for example by

X_j(t+s) = F_j(P_{j,1}, ..., P_{j,k}, X_1(t), ..., X_n(t), s)  for s > 0.   (14)

In addition, for the dynamics of the parameters P_{i,j} there will also be a dynamical system (the upper level component in Fig. 8), for s ≥ 0:

P_{i,j}(t+s) = G_{i,j}(P_{1,1}(t), ..., P_{n,k}(t), X_1(t), ..., X_n(t), s).   (15)

By applying the argument from Sec. 6.1 to both levels, the following differential equations are obtained, covering the entire adaptive dynamical system:

dX_i(t)/dt = η_{X_i} [c_{X_i}(ω_{P_{i,1},X_i} P_{i,1}(t), ..., ω_{P_{i,k},X_i} P_{i,k}(t), ω_{X_1,X_i} X_1(t), ..., ω_{X_n,X_i} X_n(t)) - X_i(t)],
dP_{i,j}(t)/dt = η_{P_{i,j}} [c_{P_{i,j}}(ω_{P_{i,1},P_{i,j}} P_{i,1}(t), ..., ω_{P_{i,k},P_{i,j}} P_{i,k}(t), ω_{X_1,P_{i,j}} X_1(t), ..., ω_{X_n,P_{i,j}} X_n(t)) - P_{i,j}(t)],   (16)

where all η and ω are 1.

Fig. 8. Overall picture of an adaptive dynamical system.
Recall from Sec. 3 the canonical differential equation (3) that defines a self-modeling network model for the case when self-model states P are introduced for the combination function parameters π_Y of all base level states Y:

dY(t)/dt = η_Y [c_{P_Y(t),Y}(ω_{X_1,Y} X_1(t), ..., ω_{X_k,Y} X_k(t)) - Y(t)],

where P_Y(t) = (P_{1,Y}(t), ..., P_{m,Y}(t)). The first equation of (16) is (although in a slightly different mathematical notation) equal to Eq. (3), which shows that this equation defines a self-modeling temporal-causal network model. In this self-modeling network model, the parameters P_{i,j} from the adaptive dynamical system are modeled by (aggregation) self-model P-states within the self-modeling network model, for parameters in the combination functions used for the states X_i in the base network defined by the states X_i. In this way, a canonical self-modeling network representation is obtained for the considered smooth adaptive dynamical system; this notion is defined by Definition 2.
Definition 2 [Canonical self-modeling network representation of a smooth adaptive dynamical system]. Let any smooth adaptive dynamical system for s ≥ 0, j = 1, ..., n, and i = 1, ..., k be given by

X_j(t+s) = F_j(P_{j,1}, ..., P_{j,k}, X_1(t), ..., X_n(t), s),
P_{i,j}(t+s) = G_{i,j}(P_{1,1}(t), ..., P_{n,k}(t), X_1(t), ..., X_n(t), s),

where the functions F_j and G_{i,j} are continuously differentiable. Then the canonical self-modeling network representation of it is defined by characteristics ω, π, η where all ω and η are 1 and

c_{X_j}(W_{j,1}, ..., W_{j,k}, V_1, ..., V_n) = V_j + [∂F_j(W_{j,1}, ..., W_{j,k}, V_1, ..., V_n, s)/∂s]_{s=0},
c_{P_{i,j}}(W_{i,1}, ..., W_{i,k}, V_1, ..., V_n) = W_{i,j} + [∂G_{i,j}(W_{i,1}, ..., W_{i,k}, V_1, ..., V_n, s)/∂s]_{s=0}.

This self-modeling network representation has dynamics induced by the following canonical differential equations:

dX_i(t)/dt = η_{X_i} [c_{X_i}(ω_{P_{i,1},X_i} P_{i,1}(t), ..., ω_{P_{i,k},X_i} P_{i,k}(t), ω_{X_1,X_i} X_1(t), ..., ω_{X_n,X_i} X_n(t)) - X_i(t)],
dP_{i,j}(t)/dt = η_{P_{i,j}} [c_{P_{i,j}}(ω_{P_{i,1},P_{i,j}} P_{i,1}(t), ..., ω_{P_{i,k},P_{i,j}} P_{i,k}(t), ω_{X_1,P_{i,j}} X_1(t), ..., ω_{X_n,P_{i,j}} X_n(t)) - P_{i,j}(t)].
Thus, by the argument preceding Definition 2, the following theorem is obtained.

Theorem 2 (The canonical self-modeling network representation of an adaptive dynamical system). Any smooth adaptive dynamical system model can be transformed in a canonical manner into a self-modeling network model, called its canonical self-modeling network representation, described by Definition 2 above. Conversely, any self-modeling network model is an adaptive dynamical system model. These statements also apply to higher-order adaptive dynamical systems in relation to higher-order self-modeling networks.

As a corollary, it now follows that any adaptive dynamical system can be described by first-order differential equations:

Corollary 2 (From a smooth adaptive dynamical system to first-order differential equations). Any smooth adaptive dynamical system can be formalized as a system of first-order differential equations.

Theorems 1 and 2 demonstrate that a modeling approach based on the self-modeling network format is at least as general as any other adaptive dynamical system modeling approach. Therefore, the choice of this format to model adaptive dynamical systems does not introduce any limitation. In particular, it can also be viewed as generalizing the most common types of neural network models.
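To make the notion of a self-model P-state concrete, the following small Python sketch (ours, not the paper's model or software) treats one adaptive connection weight as a first-order self-model state W of the network itself, in the spirit of Definition 2: the base state X2 aggregates its input weighted by the current value of W, while W follows its own combination function (here a Hebbian-like one). The function names, combination functions, and parameter values are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's agent model): a base network
# X1 -> X2 whose connection weight is represented by a first-order self-model
# state W, so that the adaptive dynamical system becomes a single network.
import math

def alogistic(v, sigma=8.0, tau=0.5):
    """Advanced logistic combination function applied to one weighted impact v."""
    return ((1.0 / (1.0 + math.exp(-sigma * (v - tau)))
             - 1.0 / (1.0 + math.exp(sigma * tau))) * (1.0 + math.exp(-sigma * tau)))

def hebbian(x_pre, x_post, w, mu=0.9):
    """Hebbian-like combination function for the self-model W-state."""
    return x_pre * x_post * (1.0 - w) + mu * w

def simulate(t_end=50.0, dt=0.1):
    x1 = 1.0                      # X1 is clamped as a persistent external stimulus
    x2, w = 0.0, 0.2              # base state X2 and self-model state W
    eta_x2, eta_w = 0.5, 0.1      # speed factors
    for _ in range(int(t_end / dt)):
        agg_x2 = alogistic(w * x1)    # impact on X2 uses W(t) as its weight
        agg_w = hebbian(x1, x2, w)    # impact on the W-state itself
        x2 += dt * eta_x2 * (agg_x2 - x2)
        w += dt * eta_w * (agg_w - w)
    return x2, w

if __name__ == "__main__":
    x2, w = simulate()
    print(f"X2 = {x2:.3f}, W = {w:.3f}")  # W has grown from its initial value 0.2
```

With a persistent stimulus X1 = 1, the W-state drifts upward while X1 and X2 co-activate, which mirrors, on a very small scale, the role the first-order W-states play in the agent model.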
7. Stationary Point and Equilibrium Analysis for Self-Modeling Networks

In this section, it is shown how stationary point and equilibrium analysis can be performed for self-modeling networks (Sec. 7.1) and how this can be applied to verify the correctness of the implemented self-modeling network model introduced in Sec. 4 compared to its design specifications (Sec. 7.2).

7.1. The general analysis approach

The following types of properties are often considered for equilibrium analysis of dynamical systems in general.
Definition 3 (Stationary point, increasing, decreasing, equilibrium). Let Y be a network state.

• Y has a stationary point at t if dY(t)/dt = 0.
• Y is increasing at t if dY(t)/dt > 0.
• Y is decreasing at t if dY(t)/dt < 0.
• The network model is in equilibrium at t if every state Y of the model has a stationary point at t.

Note that for mathematical analysis of dynamical system models, there is usually an emphasis on equilibrium analysis. However, in many cases no equilibria occur, for example in cases of oscillatory limit cycle behavior. In such cases, stationary points can still be analyzed. In the case considered in this paper, no equilibria occur when the environment is changing all the time.

By considering the canonical network representation of a dynamical system, the above criteria can be formulated in terms of the network characteristics: for network models, the following criteria in terms of the network characteristics ω_{X,Y}, c_Y, η_Y can be derived from the generic difference equation (1).38 Let Y be a state and X_1, ..., X_k the states connected toward Y. For nonzero speed factors η_Y, the following criteria in terms of network characteristics for connectivity and aggregation apply; here aggimpact_Y(t) = c_Y(ω_{X_1,Y} X_1(t), ..., ω_{X_k,Y} X_k(t)):

• Y has a stationary point at t ⇔ aggimpact_Y(t) = Y(t)
• Y is increasing at t ⇔ aggimpact_Y(t) > Y(t)
• Y is decreasing at t ⇔ aggimpact_Y(t) < Y(t)
• The network model is in equilibrium at t ⇔ aggimpact_Y(t) = Y(t) for every state Y.

The above criteria for a network being in an equilibrium (assuming nonzero speed factors) depend both on the connection weights ω_{X,Y} used for connectivity and on the combination functions c_Y used for aggregation. Note that in a self-modeling network, these criteria can be applied not only to base states but also to self-model states. In the latter case they can be used for equilibrium analysis of learning or adaptation processes.
In particular, a network model with states X_1, ..., X_n is in equilibrium if and only if the following n equations (called equilibrium equations) are satisfied:

aggimpact_{X_1}(t) = X_1(t),
   ...
aggimpact_{X_n}(t) = X_n(t).

These equations express relations between the values of the states in an equilibrium: they indicate how values in an equilibrium relate to each other, and they contain the network characteristics ω_{X_i,X_j} and c_{X_j} as parameters. Sometimes it is possible to solve these equations, for example, when they are linear, or when they are nonlinear Euclidean or geometric equations. When there is no equilibrium, stationary points of a given state X_i at some time point t can still be analyzed based on the above criteria, for example, (local) maxima or minima of the function X_i(t).
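As a simple illustration of such equilibrium equations (this example is ours, not from the paper), consider a two-node network with states X_1 and X_2, connection weights ω_{X_2,X_1} = ω_{X_1,X_2} = 1, nonzero speed factors, and the identity combination function c_{X_1}(V) = c_{X_2}(V) = V. The equilibrium equations then read

X_1(t) = ω_{X_2,X_1} X_2(t) = X_2(t),
X_2(t) = ω_{X_1,X_2} X_1(t) = X_1(t),

so every joint value with X_1 = X_2 is an equilibrium. This is the simplest instance of the common-value equilibria for well-connected networks mentioned in Sec. 6.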
The above criteria can be used to verify the correctness of (the implementation of) a network model based on inspection of stationary points or equilibria in the following manner (a minimal code sketch follows the procedure).

Verification by checking the criteria through substitution

(1) Generate a simulation.
(2) For a sample of states X_j, identify stationary points with their time points t and state values X_j(t).
(3) For each of these stationary points for a state X_j from the chosen sample at time t, identify the values X_i(t) at time t of the states X_i among X_1, ..., X_n that are connected toward X_j.
(4) Substitute all these values X_i(t) in the criterion aggimpact_{X_j}(t) = X_j(t).
(5) If the equation holds (for example, with absolute deviation < 10^-2), then this test succeeds; otherwise it fails.
(6) If this test fails, then it should be explored what error is causing this failure and how this error can be corrected.
(7) If the test succeeds, this contributes to evidence that the implemented network model is correct in comparison with its design specification.
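As an illustration of steps (3) to (5), the following Python sketch (ours; the paper itself used a dedicated MATLAB environment) substitutes the incoming state values into the stationary point criterion for one state. For concreteness it assumes an advanced logistic combination function; the state values, weights, and parameters are hypothetical and not taken from the paper's tables.

```python
# Minimal sketch of verification by substitution (hypothetical values): check the
# criterion aggimpact_Y(t) = Y(t) for a candidate stationary point of a state Y,
# given the values of its incoming states and the corresponding weights.
import math

def alogistic(values, sigma, tau):
    """Advanced logistic sum combination function c_Y (one common choice)."""
    s = sum(values)
    return ((1.0 / (1.0 + math.exp(-sigma * (s - tau)))
             - 1.0 / (1.0 + math.exp(sigma * tau))) * (1.0 + math.exp(-sigma * tau)))

def check_stationary_point(y_value, incoming, weights, sigma, tau, tol=1e-2):
    """Substitute X_i(t) into aggimpact_Y(t) = c_Y(w_1 X_1(t), ..., w_k X_k(t))
    and compare the result with Y(t); report whether the deviation is below tol."""
    impacts = [w * x for w, x in zip(weights, incoming)]
    agg = alogistic(impacts, sigma, tau)
    deviation = abs(agg - y_value)
    return deviation < tol, deviation

if __name__ == "__main__":
    # Hypothetical sample: a state Y with two incoming states at a candidate
    # stationary point, both connection weights equal to 1.
    ok, dev = check_stationary_point(
        y_value=0.99, incoming=[0.95, 0.90], weights=[1.0, 1.0],
        sigma=6.0, tau=1.0)
    print(f"criterion satisfied: {ok}, deviation = {dev:.4f}")
```

In the same way, the deviations reported in Tables 4 and 5 can be recomputed state by state once the combination functions and role matrices of the model are filled in.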
In Sec. 7.2, it is shown in detail how this form of verification by substitution can be applied for an example network.
7.2. Analysis of the introduced neural agent model

The procedure for testing correctness described in Sec. 7.1 has been applied to the neural agent model introduced in Sec. 4 of this paper. It has been applied for two different scenarios, in each of them for a sample of states covering all levels of the model. One scenario used for this type of test is a scenario where an equilibrium occurs; see Fig. 9. This was achieved by setting the external factors for stimuli and communication enabling to constant values 1, instead of the random values applied in the scenarios in Sec. 5, where no equilibria occur due to this randomness.

Fig. 9. Simulation scenario for a constant environment leading to an equilibrium. Upper graph: initial phase, time 0-50. Lower graph: up to time 1000. For a complete legend for the colors, see Fig. 10.
The analysis focuses on time t = 1000 and the chosen sample consists of the following states X_j: the second-order self-model state HW_B (X105, green line in Fig. 9 ending up around 0.01), first-order self-model state T_{exec v,B} (X95, orange line ending up around 0.2), second-order self-model state HT_B (X103, red line ending up around 0.75), base states rep_{s,B} (X31, orange line ending up around 0.92) and ws_{v,B,A} (X53, purple line ending up around 0.97), intrasyncdet_{A,m-v} (X19, blue line ending up at time 20 around 1), intersyncdet_{B,A,m} (X21, blue line ending up at time 50 around 1), and first-order self-model state W_{prep-exec v,B} (X71, blue line ending up at time 500 around 1). To calculate

aggimpact_{X_j}(t) = c_{X_j}(ω_{X_1,X_j} X_1(t), ..., ω_{X_k,X_j} X_k(t))   (17)

for each of these chosen states X_j, the states X_i with connections to X_j are determined from the network characteristics (see Table 3, second column), and for each of these X_i the weight ω_{X_i,X_j}(t) of the connection from X_i to X_j (see Table 3, middle part); from the simulation data, the values X_i(t) of the X_i at time t are obtained (see Table 3, right-hand part).
Fig. 10. Legend for the colors in Figs. 9 and 11.
Table 3. Weights ω_{X_i,X_j}(t) and simulation values X_i(t) for incoming connections from states X_i to X_j, used for the equilibrium analysis for the scenario with constant environment depicted in Fig. 9.

X_j   | Incoming X_i  | Connection weights ω_{X_i,X_j}(t) at t    | State values X_i(t) at t
X19   | X24, X26      | 1, 1                                      | 0.976171, 0.976334
X21   | X24, X7       | 1, 1                                      | 0.976171, 0.976171
X31   | X27           | 1                                         | 1
X53   | X3, X47       | 1, 0                                      | 0.976334, 0
X71   | X39-X44       | 1, 1, 1, 1, 1, 1                          | 1, 0.999833, 0.999833, 1, 1, 1
X95   | X39-X44, X5   | 0.12, 0.12, 0.12, 0.12, 0.12, 0.12, 1     | 1, 0.999833, 0.999833, 1, 1, 1, 1
X103  | X31           | 0.08                                      | 0.917915
X105  | X31           | 1                                         | 0.917915
Fig. 11. Simulation scenario for a nonrandomly changing environment. For a legend for the colors, see Fig. 10.
Table 4. Equilibrium analysis for the scenario with constant environment depicted in Fig. 9.

State X_j | Time point t | State value X_j(t) | Incoming states X_i | Impact values ω_{X_i,X_j}(t) X_i(t) | aggimpact_{X_j}(t) | Deviation
intrasyncdet_{A,m-v} (X19) | 1000 | 0.999832697 | X24, X26 | 0.97617082, 0.976334164 | 0.999832697 | 3.3×10^-11
intersyncdet_{B,A,m} (X21) | 1000 | 0.999999995 | X24, X7 | 0.97617082, 0.976170816 | 0.999999996 | 1.2×10^-10
rep_{s,B} (X31) | 1000 | 0.917915001 | X27 | 1 | 0.917915001 | < 10^-17
ws_{v,B,A} (X53) | 1000 | 0.976334161 | X3, X47 | 0.976334164, 0 | 0.976334164 | 2.9×10^-9
W_{prep-exec v,B} (X71) | 1000 | 0.99944441 | X39-X44 | 1, 0.999832697, 0.999832697, 0.999999995, 0.999999995, 0.999999995 | 0.999446296 | 1.9×10^-6
T_{exec v,B} (X95) | 1000 | 0.188195503 | X39-X44, X5 | 0.120000000, 0.119979924, 0.119979924, 0.119999999, 0.119999999, 0.119999999, 1 | 0.188195503 | 3.5×10^-11
HW_B (X103) | 1000 | 0.012837073 | X31 | 0.0734332 | 0.012837073 | 2.9×10^-17
HT_B (X105) | 1000 | 0.740701054 | X31 | 0.917915001 | 0.740701054 | < 10^-17
Based on these connection weights ω_{X_i,X_j}(t) and state values X_i(t) in Table 3, the products ω_{X_i,X_j}(t) X_i(t) are determined (see Table 4, sixth column). Then, by applying the combination function from Table 1 to these products, aggimpact_{X_j}(t) is determined for each X_j via Eq. (17) (see the one-but-last column in Table 4). Finally, this value is compared to the state value X_j(t) of X_j to obtain the deviation = aggimpact_{X_j}(t) - X_j(t) shown in the last column of Table 4.
Table 5. Stationary point analysis for the scenario with nonrandom, nonconstant environment depicted in Fig. 11.

State X_j | Time point t | State value X_j(t) | Incoming states X_i | Impact values ω_{X_i,X_j}(t) X_i(t) | aggimpact_{X_j}(t) | Deviation
sense_{A,v,B} (X30) | 1999 | 0.972980783 | X47, X50 | 0.972980783, 0.972980783 | 0.972980783 | < 10^-17
rep_{s,B} (X31) | 1949 | 0.917915001 | X27 | 1 | 0.917915001 | < 10^-17
intrasyncdet_{B,b-v} (X41) | 1949 | 0.999816153 | X46, X47, X41 | 0.973237336, 0.973416098, 0.999816153 | 0.999816356 | 2.0×10^-7
intrasyncdet_{B,b-v} (X41) | 1999 | 0.978349541 | X46, X47, X41 | 0.951915303, 0.972980783, 0.978349541 | 0.978349541 | 5.7×10^-13
intersyncdet_{A,B,v} (X44) | 1949 | 0.999971441 | X47, X30, X44 | 0.973416098, 0.973388663, 0.999971441 | 0.999971816 | 3.8×10^-7
intersyncdet_{A,B,v} (X44) | 1999 | 1 | X47, X30, X44 | 0.973416098, 0.972980783, 1 | 0.999552797 | 0.000447203
ws_{b,B,A} (X52) | 1949 | 0.973218991 | X46, X30 | 0.947347128, - | 0.947347128 | 0.025871863
ws_{b,B,A} (X52) | 1999 | 0.951915303 | X46, X30 | 0.926195296, - | 0.926195296 | 0.025720006
ws_{v,B,A} (X53) | 1999 | 0.972980783 | X47, X30 | 0.972980783, - | 0.972980783 | < 10^-17
W_{prep-exec v,B} (X71) | 1999 | 0.976505645 | X39-X44 | 1, 0.978349541, 0.978349541, 1, 1, 1 | 0.999313691 | 0.022808046
T_{exec v,B} (X95) | 1649 | 0.404971073 | X39-X44, X5 | 0.12, 0.119924731, 0.119924731, 0.060065522, 0.060065522, 0.060065218, 1 | 0.404971648 | 5.8×10^-7
T_{exec v,B} (X95) | 1999 | 0.188288623 | X39-X44, X5 | 0.12, 0.117401945, 0.117401945, 0.12, 0.12, 0.12, 1 | 0.193456528 | 0.005167905
HW_B (X103) | 1949 | 0.005872159 | X31 | 0.0367166 | 0.005872159 | < 10^-17
HW_B (X103) | 1999 | 1.4×10^-14 | X31 | 0.039982777 | 0.006444919 | 0.006444919
HT_B (X105) | 1949 | 0.740701054 | X31 | 0.917915001 | 0.740701054 | < 10^-17
HT_B (X105) | 1999 | 4.36807×10^-13 | X31 | 0.999569427 | 0.006444919 | 0.006444919
In Table 4, all absolute deviations are smaller than 10^-6. This provides evidence that the implemented model is correct with respect to its design specifications. Note that this provides evidence for the correctness of the model in general, not only for this special scenario, as if there were errors, they would most probably also show their effects in this example scenario.

Yet, for still more evidence, another scenario has been analyzed as well, in which the environmental factors for stimuli and communication enabling do change, but in a nonrandom manner; see Fig. 11. Here, for both agents, stimulus s occurs from time 0 to time 450 and disappears from 450 to 500; this is repeated every 500 time units. Moreover, interaction is not enabled from time 0 to time 50 and is enabled from time 50 to time 400, which is repeated every 400 time units. Due to the changing environment, no equilibrium occurs here. However, there are many cases of (approximate) stationary points. In particular, stationary points have been analyzed as above for a sample of states and time points, indicated in the left three columns of Table 5. Here most absolute deviations are < 0.01. However, three of them are in the order of 0.02, which is larger than expected for a stationary point. The graph indeed shows that these actually are not approximately stationary points. All in all, also these results provide evidence that the implemented model is correct with respect to its design specifications.
8. Discussion

In this paper, a neural agent model was introduced for the way intrapersonal synchrony and interpersonal synchrony induce behavioral adaptivity between the synchronized persons.1-10 In the literature, it was advocated to use a dynamical systems modeling approach to model the complex, cyclical types of dynamics that occur.11,12 The model presented here is indeed a dynamical system model; moreover, it is multi-adaptive in that the behavioral adaptivity covers both short-term adaptations and long-term adaptations, reflecting short-term affiliation and long-term bonding. The former type of adaptation was modeled using (nonsynaptic) adaptive excitability,15-18 whereas for the latter type a more classical synaptic type of adaptation19-22 was used.

Following the aforementioned literature on synchrony, both types of adaptivity were modeled as driven by the (internally detected) intrapersonal synchrony and interpersonal synchrony for the agent. By also including metaplasticity23 in the model to control the adaptations in a context-sensitive manner, the agent model became second-order adaptive. The simulations of the model have been performed using the dedicated software environment developed in MATLAB, on HP Intel Core i5 and Apple MacBook Pro Intel Core i9 laptops. Execution times per run were less than a minute; for example, on the HP Intel Core i5 with MATLAB 2017a they were between 40 and 50 s. The software environment used can be downloaded via URL https://www.researchgate.net/publication/368775720.
Thus, this paper has focused on the emerging and adaptive effects of human social interaction, concerning emerging synchronization and related adaptive affiliation and bonding, with the therapist-client interaction as a central application option. By such simulations, for example, a therapist can get insight into how they can improve the way they interact with their clients and make therapy or counseling more effective. From the scientific perspective, the modeling also contributes formalization to this area of psychology, which is almost always addressed in informal manners. The contributed approach can also be used as a solid basis for the development of supporting virtual agents in that context.

Synchrony and related patterns in the brain are also analyzed, for example, for atypical brain conditions of subjects such as PTSD,46 epilepsy,47,48 ADHD,49,50 or autism.51 The work presented in this paper distinguishes itself from this in four ways: (1) it abstracts from the specific brain processes and instead focuses on the level of mental processes, (2) it addresses not only the emergence of synchrony but also the causal effects of synchronization on the adaptivity of interaction behavior, such as affiliation and bonding, (3) it does not address the emergence of synchrony from an objective external observer perspective but from a subjective perspective of the agents themselves, and (4) it focuses on typical instead of atypical conditions of subjects.

We already engaged in computational modeling of synchrony between agents in earlier work.52,53
However, in the models described there, no (subjective) internal detection of synchrony takes place. Moreover, in the first one no adaptivity was covered,52 and in the second one another type of adaptivity was incorporated, namely of internal connections from representation states to preparation states.53 As far as we know, previous work37 describes the only other computational agent model in which subjective synchrony detection is addressed. However, that model covers no long-term behavioral adaptivity and does not address adaptive intrinsic excitability either, whereas both are included in the current model.

Earlier work1,29 addressing behavioral adaptation due to coordinated actions used a dynamic form of the "bonding based on homophily" principle54 to model the effect of coordination of emotions and actions on behavioral adaptivity, but no (subjective) detection of synchrony was used.
In this paper, mathematical analysis was also addressed for the modeling approach applied. The first type of analysis shows that any smooth adaptive dynamical system has a canonical representation as a self-modeling network. This implies theoretically that the self-modeling network format is widely applicable and that no biases or limitations are introduced by choosing a network modeling approach to design adaptive dynamical system models. In particular, it also generalizes the most common neural system models.

This finding has also been shown in many practical applications, varying from biological, cognitive and affective to social processes and their interaction. It is illustrated by many examples, in particular in books38,39 introducing the self-modeling network modeling approach and its applications, a book55 focusing on the use of self-modeling network models to handle dynamics, adaptation and control of internal mental models, and a book56 focusing on the use of self-modeling network models to model organizational learning processes. Furthermore, stationary point and equilibrium analysis were addressed and applied to the introduced self-modeling network model. These analyses were used as a form of verification of the model, which provided evidence that the implemented model is correct with respect to its design specifications.

Thus, a flexible, human-like, second-order multi-adaptive neural agent model was obtained for the way in which detected synchrony leads to different types of behavioral adaptivity concerning the short-term affiliation and long-term bonding between the two agents.
A number of aspects that may still be considered relevant are not covered by the model introduced here. One of these aspects is the use of time lags in the process of synchrony detection. This has not been addressed here, but can be a relevant extension of the work reported here. More in general, different combination functions describing methods for synchrony detection may be tried out, either with or without time lags. Another relevant aspect that is not addressed in this paper is the role of interruptions or transitions in synchrony and their effect on behavioral adaptivity.

For further work, many more simulation experiments can be designed and conducted, for example to explore the question which types of short-term synchrony are most likely to become translated into long-term benefits for a relationship, or to explore in more detail the roles of intrapersonal synchrony and interpersonal synchrony.

Also, the model can easily be extended to cover interaction between more than two agents. To achieve that, for each additional other agent, the number of sensing states within the agent model can be extended by three (one for each modality), and similarly three additional representation states and three interpersonal synchrony detection states can be added for each additional other agent. Accordingly, additional first- and second-order self-model states can also be added. This will add complexity to the agent model.

Alternatively, if the model abstracts from the differences between the other agents, then the current agent model can be applied directly without any additional states, by using the current sensing, representation, and synchrony detection states and also the self-model states as states aggregating over all other agents. This keeps the complexity of the agent model the same, but of course the model is then less context-sensitive, as different adaptations to specific other agents are not possible. So, as more often happens, there is a trade-off here between the complexity of the agent model and the extent of context-sensitivity: more context-sensitivity comes with more complexity.
Considered from a wider scientific perspective, the model can provide a basis to develop adaptive virtual agents that are able to concentrate on each other by short-term behavioral adaptivity and bond with each other by long-term behavioral adaptivity in a human-like manner. For example, in other work57 the focus is on virtual conversational agents and how they can adapt to their human users. In that work, classical learning techniques from AI, such as Q-learning, are used to optimize the agent's behavior with respect to a given user; such techniques are not directly inspired or justified by neuroscience. In contrast, this paper offers approaches such as synaptic plasticity by adaptive connection weights19-22 and nonsynaptic plasticity by adaptive excitability thresholds,15,16 and in addition metaplasticity23 to control both types of plasticity. As all of these forms of adaptivity are justified in the neuroscience literature, this will in principle lead to a more human-like agent model. Nevertheless, it will be interesting to explore in further work how these two different perspectives can benefit from each other.

Concerning the relation of the considered agent model to mechanisms from neuroscience, note that these mechanisms have been incorporated only from an abstracted functional perspective. This means that it cannot be claimed that the model is human-like at the (neuro)physiological level. The latter has been left out of consideration here and would require another research project.
Validation of the model has only been done based on qualitative empirical information from the psychological literature. Dynamic and adaptive patterns have been obtained that are in accordance with that type of empirical information. Due to the lack of quantitative (numerical) empirical information, no quantitative validation has been performed yet. For future research, it is considered to acquire such numerical data and then perform quantitative numerical validation by parameter tuning. How that can be done is described in Chap. 19 of our 2022 book.58

Part of the work presented in this paper was presented in a preliminary form at the AIAI'22 conference and published in its proceedings as a paper59 of less than 50% of the length of this paper. That paper is limited to the design of the model and an example simulation. In contrast, the fundamental mathematical analysis of the positioning of the modeling approach based on self-modeling temporal-causal networks in the landscape of adaptive dynamical systems (described in Sec. 6 and Appendix A) is new. It has been shown there that any smooth adaptive dynamical system has a canonical representation as a self-modeling temporal-causal network, which means that the applied modeling approach is universal for smooth adaptive dynamical systems. Moreover, the in-depth verification of the introduced model (described in Sec. 7) is also new. Here, substantial evidence was added that the implemented model is correct with respect to its design specification. Finally, the full specification of the model in Appendix B is new as well.
9. Conclusion

All in all, we achieved the following summarized findings and discoveries:

• Formalization of the informal domain of human social interaction, involving emerging and multiple types of adaptive dynamical system effects, is possible.
• Unifying bridges between causal modeling, network modeling, and dynamical systems modeling are possible.
• A systematic approach to (higher-order) adaptivity in these different modeling perspectives is possible.

More specifically, the following has been achieved:

• The notion of canonical temporal-causal network representation for any smooth dynamical system is introduced; it is shown how any smooth dynamical system can be assigned this canonical network representation. This also applies to the most common neural network approaches. This creates a bridge between different subdisciplines that are usually kept separate: causal modeling in AI, neural networks in AI, (multidisciplinary) network science, and computational science. It enables simulation and analysis of any smooth dynamical system in terms of network concepts.
• The notion of canonical temporal-causal self-modeling network representation for any smooth adaptive dynamical system is introduced; it is shown how any smooth adaptive dynamical system can be assigned this canonical network representation. This again creates a bridge between different subdisciplines that are usually kept separate and provides a clear way of adding higher-order adaptivity to any of these: metalevel architectures in AI, causal modeling in AI, neural networks in AI, (multidisciplinary) network science, and computational science. It enables simulation and analysis of any smooth higher-order adaptive dynamical system in terms of network concepts.
• It is shown by mathematical stationary point and equilibrium analysis how verification of the correctness of the introduced implemented neural agent model (in comparison to its design specifications) provides more evidence of its correctness.
• By providing the full specification of the introduced neural agent model, reproducibility is obtained.
Appendix A. More Details for Sec. 6

In this appendix, a more detailed explanation of the main argument in Sec. 6.1 for Theorem 1 can be found (see Box 2), and it is discussed how the chosen approach differs from and relates to Ashby's approach.
Box 2. More detailed explanation of the argument for Theorem 1.

Assume s > 0. Subtracting Eq. (8) from Eq. (7) (see Sec. 6.1) and dividing by s provides

[X_j(t+s) - X_j(t)] / s = [F_j(X_1(t), ..., X_n(t), s) - F_j(X_1(t), ..., X_n(t), 0)] / s.

When for both sides of this equation the limit for s approaching 0 is taken, the left-hand side becomes (renaming s to Δt to get the familiar expression)

lim_{s↓0} ([X_j(t+s) - X_j(t)] / s) = lim_{Δt↓0} ([X_j(t+Δt) - X_j(t)] / Δt) = dX_j(t)/dt = X'_j(t)

and the right-hand side becomes (here renaming s to Δs to get the familiar expression)

lim_{s↓0} ([F_j(X_1(t), ..., X_n(t), s) - F_j(X_1(t), ..., X_n(t), 0)] / s)
   = lim_{Δs↓0} ([F_j(X_1(t), ..., X_n(t), Δs) - F_j(X_1(t), ..., X_n(t), 0)] / Δs)
   = [∂F_j(X_1(t), ..., X_n(t), s)/∂s]_{s=0}.

Therefore, it is obtained:

dX_j(t)/dt = [∂F_j(X_1(t), ..., X_n(t), s)/∂s]_{s=0}.

The differences and relationships with the approach by Ashby44 are as follows. Instead of (7), Ashby44 uses the special case of (7) for t = 0 as an indication of a state-determined system:

X_j(s) = F_j(X_1(0), ..., X_n(0), s)  for s > 0.   (A.1)

As this special case by itself is not enough to characterize a state-determined system, he furthermore also uses a second condition for a state-determined system that can be called transitivity:

F_i(X_1(t), ..., X_n(t), s' + s'') = F_i(F_1(X_1(t), ..., X_n(t), s'), ..., F_n(X_1(t), ..., X_n(t), s'), s'').   (A.2)

So, in the end he characterizes a state-determined system by the conjunction of conditions (A.1) and (A.2). It turns out that condition (7), used here to characterize a state-determined system, is not equivalent to (A.1) alone but to this conjunction of (A.1) and (A.2); in other words, the following holds:

Theorem 3 (Characterizing state-determined systems). The following are equivalent:

(i) X_j(t+s) = F_j(X_1(t), ..., X_n(t), s) for s > 0,
(ii) X_j(s) = F_j(X_1(0), ..., X_n(0), s) for s > 0 and the system is transitive.

Proof. (i) ⇒ (ii) That (7) implies transitivity follows from

X_i(t + (s' + s'')) = X_i((t + s') + s'')

and working out both sides of this:

X_i(t + (s' + s'')) = F_i(X_1(t), ..., X_n(t), s' + s'')  for s', s'' > 0,
X_i((t + s') + s'') = F_i(X_1(t + s'), ..., X_n(t + s'), s''),

where

X_i(t + s') = F_i(X_1(t), ..., X_n(t), s').

So

X_i((t + s') + s'') = F_i(F_1(X_1(t), ..., X_n(t), s'), ..., F_n(X_1(t), ..., X_n(t), s'), s'').

This proves transitivity from (7).

(ii) ⇒ (i) To be proven:

X_j(t+s) = F_j(X_1(t), ..., X_n(t), s).   (7)

Given

X_j(s) = F_j(X_1(0), ..., X_n(0), s)  for all s,

it also holds that

X_j(t+s) = F_j(X_1(0), ..., X_n(0), t+s).

Now by transitivity it holds

X_j(t+s) = F_j(X_1(0), ..., X_n(0), t+s)
   = F_j(F_1(X_1(0), ..., X_n(0), t), ..., F_n(X_1(0), ..., X_n(0), t), s)
   = F_j(X_1(t), ..., X_n(t), s).

This proves (7).

This explains how the approach used here differs from but still relates to Ashby's approach.44 Note that another main difference is that Ashby did not analyze how (adaptive) dynamical systems can be related to (adaptive) network models, which is our main focus here.
Appendix B. Further Details of the Introduced Model

B.1. Overview of all states of the model

Tables B.1 (base states) and B.2 (first- and second-order self-model states) provide explanations of all states of the introduced model.

B.2. Full specification of the model in role matrices format

In this section, first some further simulation pictures are shown for the affiliation patterns represented by the W-states in relation to the detected intrapersonal and interpersonal synchronies. Next, the full specification of the introduced adaptive network model is shown in terms of role matrices, which are tables with the network characteristics in a standardized table format. These tables are readable by the dedicated software environment (Network-Oriented Modeling Software) available via https://www.researchgate.net/publication/368775720, which can then generate simulations. In this way reproducibility is supported.

In Tables B.3-B.7 the full specification of the adaptive network model by role matrices is shown. Each role matrix has 93 rows, for all states X1-X93 of the model.
The connectivity characteristics are specified by role matrices mb and mcw, shown in Tables B.3 and B.4. Role matrix mb lists for each state the states (at the same or a lower level) from which it gets its incoming connections, while role matrix mcw lists the connection weights for these connections.

Nonadaptive connection weights are indicated in mcw (Table B.4) by a number (in a green shaded cell), whereas adaptive connection weights are indicated by a reference to the (self-model) W-state representing the adaptive value (in a peach-red shaded cell). This can be seen for states X7-X9 (with self-model W-states X63-X65), X11-X13 (with self-model W-states X54-X56), X24-X26 (with self-model W-states X57-X59), X28-X30 (with self-model W-states X75-X77), X32-X34 (with self-model W-states X66-X68), and X45-X53 (with self-model W-states X69-X71, X60-X62, and X72-X74).

The network characteristics for aggregation are defined by the selection of combination functions from the library and the values for their parameters. Role matrix mcfw specifies by weights which state uses which combination function; see Table B.5. Role matrix mcfp (see Table B.6) indicates the parameter values for the chosen combination functions. A number of these are adaptive: their adaptive excitability thresholds are represented by self-model T-states. These concern agent A states X11-X13 (with excitability threshold self-model T-states X78-X80) and X24-X26 (with self-model T-states X81-X83), and agent B states X32-X34 (with excitability threshold self-model T-states X84-X86) and X45-X47 (with self-model T-states X87-X89).
Table B.1. Base states of the computational network model.

State  Name  Explanation
X1   ws_{s,A}   World state for stimulus s for A
X2   ws_{s,B}   World state for stimulus s for B
X3   ws_{s,AB}   World state for stimulus s both for A and B
X4   cont_{h,A}   Context state for excitability threshold for A
X5   cont_{h,B}   Context state for excitability threshold for B
X6   sense_{s,A}   Sensor state of A for stimulus s for A
X7   sense_{B,m,A}   Sensor state of A for movement m of B
X8   sense_{B,b,A}   Sensor state of A for expressed affective response b of B
X9   sense_{B,v,A}   Sensor state of A for verbal action v of B
X10   rep_{s,A}   Sensory representation state of A for stimulus s for A
X11   rep_{B,m,A}   Sensory representation state of A for movement m of B
X12   rep_{B,b,A}   Sensory representation state of A for expressed affective response b of B
X13   rep_{B,v,A}   Sensory representation state of A for verbal action v of B
X14   prep_{m,A}   Preparation state for movement m of A
X15   prep_{b,A}   Preparation state for affective response b of A
X16   prep_{v,A}   Preparation state for verbal action v of A
X17   cons_emotion_{b,A}   Conscious emotion state for b of A
X18   intrasyncdet_{A,b-m}   Intrapersonal synchrony detection of A for executing b and m by A
X19   intrasyncdet_{A,m-v}   Intrapersonal synchrony detection of A for executing m and v by A
X20   intrasyncdet_{A,b-v}   Intrapersonal synchrony detection of A for executing b and v by A
X21   intersyncdet_{B,A,m}   Interpersonal synchrony detection of A for executing m by B and A
X22   intersyncdet_{B,A,b}   Interpersonal synchrony detection of A for executing b by B and A
X23   intersyncdet_{B,A,v}   Interpersonal synchrony detection of A for executing v by B and A
X24   move_{m,A}   Executing movement m by A
X25   exp_affect_{b,A}   Executing expression of b by A
X26   talk_{A,B,v}   Executing verbal action v by A
X27   sense_{s,B}   Sensor state of B for stimulus s for B
X28   sense_{A,m,B}   Sensor state of B for movement m of A
X29   sense_{A,b,B}   Sensor state of B for expressed affective response b of A
X30   sense_{A,v,B}   Sensor state of B for verbal action v of A
X31   rep_{s,B}   Sensory representation state of B for stimulus s for B
X32   rep_{A,m,B}   Sensory representation state of B for movement m of A
X33   rep_{A,b,B}   Sensory representation state of B for expressed affective response b of A
X34   rep_{A,v,B}   Sensory representation state of B for verbal action v of A
X35   prep_{m,B}   Preparation state for movement m of B
X36   prep_{b,B}   Preparation state for affective response b of B
X37   prep_{v,B}   Preparation state for verbal action v of B
X38   cons_emotion_{b,B}   Conscious emotion state for b of B
X39   intrasyncdet_{B,b-m}   Intrapersonal synchrony detection of B for executing b and m by B
X40   intrasyncdet_{B,m-v}   Intrapersonal synchrony detection of B for executing m and v by B
X41   intrasyncdet_{B,b-v}   Intrapersonal synchrony detection of B for executing b and v by B
X42   intersyncdet_{A,B,m}   Interpersonal synchrony detection of B for executing m by A and B
X43   intersyncdet_{A,B,b}   Interpersonal synchrony detection of B for executing b by A and B
X44   intersyncdet_{A,B,v}   Interpersonal synchrony detection of B for executing v by A and B
X45   move_{m,B}   Executing movement m by B
X46   exp_affect_{b,B}   Executing expression of b by B
X47   talk_{B,A,v}   Executing verbal action v by B
X48   ws_{m,A,B}   World state for transmitting movement m of A to B
X49   ws_{b,A,B}   World state for transmitting affective response b of A to B
X50   ws_{v,A,B}   World state for transmitting verbal action v of A to B
X51   ws_{m,B,A}   World state for transmitting movement m of B to A
X52   ws_{b,B,A}   World state for transmitting affective response b of B to A
X53   ws_{v,B,A}   World state for transmitting verbal action v of B to A
Table B.2. First-order self-model T-states and W-states for excitability thresholds and connection weights, and second-order self-model HT-states and HW-states for the adaptation speed of the T-states and W-states of the computational network model.

State  Name  Explanation
X54   W_{sense-rep m,A}   First-order self-model state for the weight of A's internal connection from sensing to representing movement m
X55   W_{sense-rep b,A}   First-order self-model state for the weight of A's internal connection from sensing to representing affective response b
X56   W_{sense-rep v,A}   First-order self-model state for the weight of A's internal connection from sensing to representing verbal action v
X57   W_{prep-exec m,A}   First-order self-model state for the weight of A's internal connection from preparing to executing movement m
X58   W_{prep-exec b,A}   First-order self-model state for the weight of A's internal connection from preparing to expressing affective response b
X59   W_{prep-exec v,A}   First-order self-model state for the weight of A's internal connection from preparing to executing verbal action v
X60   W_{exec-ws m,A,B}   First-order self-model state for the weight of A's external connection from executing to world state for movement m for B (enabling interaction from A to B)
X61   W_{exec-ws b,A,B}   First-order self-model state for the weight of A's external connection from executing to world state for affective response b for B (enabling interaction from A to B)
X62   W_{exec-ws v,A,B}   First-order self-model state for the weight of A's external connection from executing to world state for verbal action v for B (enabling interaction from A to B)
X63   W_{ws-sense m,B,A}   First-order self-model state for the weight of A's external connection from world state to A's sensing of movement m from B
X64   W_{ws-sense b,B,A}   First-order self-model state for the weight of A's external connection from world state to A's sensing of affective response b from B
X65   W_{ws-sense v,B,A}   First-order self-model state for the weight of A's external connection from world state to A's sensing of verbal action v from B
X66   W_{sense-rep m,B}   First-order self-model state for the weight of B's internal connection from sensing to representing movement m
X67   W_{sense-rep b,B}   First-order self-model state for the weight of B's internal connection from sensing to representing affective response b
X68   W_{sense-rep v,B}   First-order self-model state for the weight of B's internal connection from sensing to representing verbal action v
X69   W_{prep-exec m,B}   First-order self-model state for the weight of B's internal connection from preparing to executing movement m
X70   W_{prep-exec b,B}   First-order self-model state for the weight of B's internal connection from preparing to expressing affective response b
X71   W_{prep-exec v,B}   First-order self-model state for the weight of B's internal connection from preparing to executing verbal action v
X72   W_{exec-ws m,B,A}   First-order self-model state for the weight of B's external connection from executing to world state for movement m for A (enabling interaction from B to A)
X73   W_{exec-ws b,B,A}   First-order self-model state for the weight of B's external connection from executing to world state for affective response b for A (enabling interaction from B to A)
X74   W_{exec-ws v,B,A}   First-order self-model state for the weight of B's external connection from executing to world state for verbal action v for A (enabling interaction from B to A)
X75   W_{ws-sense m,A,B}   First-order self-model state for the weight of B's external connection from world state to B's sensing of movement m from A
X76   W_{ws-sense b,A,B}   First-order self-model state for the weight of B's external connection from world state to B's sensing of affective response b from A
X77   W_{ws-sense v,A,B}   First-order self-model state for the weight of B's external connection from world state to B's sensing of verbal action v from A
X78   T_{rep m,A}   First-order self-model state for the excitability threshold of A's sensory representation state rep_{m,A} for movement m
X79   T_{rep b,A}   First-order self-model state for the excitability threshold of A's sensory representation state rep_{b,A} for affective response b
X80   T_{rep v,A}   First-order self-model state for the excitability threshold of A's sensory representation state rep_{v,A} for verbal response v
X81   T_{exec m,A}   First-order self-model state for the excitability threshold of A's execution state move_{m,A} for movement m
X82   T_{exec b,A}   First-order self-model state for the excitability threshold of A's execution state exp_affect_{b,A} for affective response b
X83   T_{exec v,A}   First-order self-model state for the excitability threshold of A's execution state talk_{v,A} for verbal response v
X84   T_{rep m,B}   First-order self-model state for the excitability threshold of B's sensory representation state rep_{m,B} for movement m
X85   T_{rep b,B}   First-order self-model state for the excitability threshold of B's sensory representation state rep_{b,B} for affective response b
X86   T_{rep v,B}   First-order self-model state for the excitability threshold of B's sensory representation state rep_{v,B} for verbal response v
X87   T_{exec m,B}   First-order self-model state for the excitability threshold of B's execution state move_{m,B} for movement m
X88   T_{exec b,B}   First-order self-model state for the excitability threshold of B's execution state exp_affect_{b,B} for affective response b
X89   T_{exec v,B}   First-order self-model state for the excitability threshold of B's execution state talk_{v,B} for verbal response v
X90   HW_A   Second-order self-model state for the speed factor of the first-order self-model W-states for A
X91   HW_B   Second-order self-model state for the speed factor of the first-order self-model W-states for B
X92   HT_A   Second-order self-model state for the speed factor of the first-order self-model T-states for A
X93   HT_B   Second-order self-model state for the speed factor of the first-order self-model T-states for B
Table B.3. Role matrix mb for base connectivity.
Table B.4. Role matrix mcw for connection weights. For each state, the connection weights are listed in the order of its incoming connections (as specified in role matrix mb); an entry of the form Xn indicates that the weight is adaptive and represented by the first-order self-model state Xn. The original column layout (columns 1 to 8) is not reproduced here.

X1-X5 (ws_s,A, ws_s,B, ws_s,AB, cont_h,A, cont_h,B): 1
X6 (sense_s,A): 1, 1
X7-X9 (sense_B,m,A, sense_B,b,A, sense_B,v,A): 1, X63 / 1, X64 / 1, X65
X10 (rep_s,A): 1
X11-X13 (rep_B,m,A, rep_B,b,A, rep_B,v,A): X54, 1 / X55, 1 / X56, 1
X14, X15 (prep_m,A, prep_b,A): 1, 1
X16 (prep_v,A): 1, 1, 1
X17 (cons_emotion_b,A): 1, 1, 1, 1, 1
X18-X23 (intrasyncdet_A,b-m, intrasyncdet_A,m-v, intrasyncdet_A,b-v, intersyncdet_B,A,m, intersyncdet_B,A,b, intersyncdet_B,A,v): 1, 1
X24-X26 (move_m,A, exp_affect_b,A, talk_A,B,v): X57 / X58 / X59
X27 (sense_s,B): 1, 1
X28-X30 (sense_A,m,B, sense_A,b,B, sense_A,v,B): 1, X75 / 1, X76 / 1, X77
X31 (rep_s,B): 1
X32-X34 (rep_A,m,B, rep_A,b,B, rep_A,v,B): X66, 1 / X67, 1 / X68, 1
X35, X36 (prep_m,B, prep_b,B): 1, 1
X37 (prep_v,B): 1, 1, 1
X38 (cons_emotion_b,B): 1, 1, 1, 1, 1
X39-X44 (intrasyncdet_B,b-m, intrasyncdet_B,m-v, intrasyncdet_B,b-v, intersyncdet_A,B,m, intersyncdet_A,B,b, intersyncdet_A,B,v): 1, 1
X45-X47 (move_m,B, exp_affect_b,B, talk_B,A,v): X69 / X70 / X71
X48-X50 (ws_m,A,B, ws_b,A,B, ws_v,A,B): X60 / X61 / X62
X51-X53 (ws_m,B,A, ws_b,B,A, ws_v,B,A): X72 / X73 / X74
X54-X59 (Wsense-rep and Wprep-exec states of A): 1, 1, 1, 1, 1, 1
X60-X62 (Wexec-ws_m,A,B, Wexec-ws_b,A,B, Wexec-ws_v,A,B): 1
X63-X65 (Wws-sense_m,B,A, Wws-sense_b,B,A, Wws-sense_v,B,A): 1
X66-X71 (Wsense-rep and Wprep-exec states of B): 1, 1, 1, 1, 1, 1
X72-X74 (Wexec-ws_m,B,A, Wexec-ws_b,B,A, Wexec-ws_v,B,A): 1
X75-X77 (Wws-sense_m,A,B, Wws-sense_b,A,B, Wws-sense_v,A,B): 1
X78-X89 (T-states): -0.12, -0.12, -0.12, -0.12, -0.12, -0.12, 1
X90, X91 (HW,A, HW,B): 0.01
X92, X93 (HT,A, HT,B): 1
Table B.5. Role matrix mcfw for combination function weights. The combination functions used in the model are: (1) alogistic, (2) compdiff, (3) randstepmod, (4) randstepmodopp and (5) eucl. Each of the states X1-X93 has weight 1 for exactly one of these combination functions. [The per-state assignment of combination functions could not be recovered from the extracted text of the original matrix.]
Table B.6. Role matrix mcfp for combination function parameters. For each state, the parameter values of its combination function are listed in order; an entry of the form Xn indicates that the parameter (an excitability threshold) is adaptive and represented by the first-order self-model T-state Xn.

X1-X3 (ws_s,A, ws_s,B, ws_s,AB): 120, 60
X4, X5 (cont_h,A, cont_h,B): 5, 0.5
X6 (sense_s,A): 1, 1
X7-X9 (sense_B,m,A, sense_B,b,A, sense_B,v,A): 1, 2
X10 (rep_s,A): 5, 0.5
X11-X13 (rep_B,m,A, rep_B,b,A, rep_B,v,A): 5, X78 / 5, X79 / 5, X80
X14, X15 (prep_m,A, prep_b,A): 5, 0.6
X16 (prep_v,A): 5, 0.9
X17 (cons_emotion_b,A): 5, 3.25
X18-X23 (intrasyncdet and intersyncdet states of A): no values listed
X24-X26 (move_m,A, exp_affect_b,A, talk_A,B,v): 5, X81 / 5, X82 / 5, X83
X27 (sense_s,B): 1, 1
X28-X30 (sense_A,m,B, sense_A,b,B, sense_A,v,B): 1, 2
X31 (rep_s,B): 5, 0.5
X32-X34 (rep_A,m,B, rep_A,b,B, rep_A,v,B): 5, X84 / 5, X85 / 5, X86
X35, X36 (prep_m,B, prep_b,B): 5, 0.6
X37 (prep_v,B): 5, 0.9
X38 (cons_emotion_b,B): 5, 3.25
X39-X44 (intrasyncdet and intersyncdet states of B): no values listed
X45-X47 (move_m,B, exp_affect_b,B, talk_B,A,v): 5, X87 / 5, X88 / 5, X89
X48-X53 (world states ws for transmission between A and B): 1, 1
X54-X59 (Wsense-rep and Wprep-exec states of A): 5, 4.5
X60-X62 (Wexec-ws_m,A,B, Wexec-ws_b,A,B, Wexec-ws_v,A,B): 60, 30
X63-X65 (Wws-sense_m,B,A, Wws-sense_b,B,A, Wws-sense_v,B,A): 5, 4.5
X66-X71 (Wsense-rep and Wprep-exec states of B): 5, 4.5
X72-X74 (Wexec-ws_m,B,A, Wexec-ws_b,B,A, Wexec-ws_v,B,A): 60, 30
X75-X77 (Wws-sense_m,A,B, Wws-sense_b,A,B, Wws-sense_v,A,B): 5, 4.5
X78-X89 (T-states): 5, 0.5
X90-X93 (HW,A, HW,B, HT,A, HT,B): 5, 0.7
In Table B.7, the role matrix ms for speed factors is shown, which lists all speed factors; next to it, the list iv of initial values can be found. Some entries of ms are adaptive as well. For agent A, the speed factors of the W-states X54-X65 are represented by the (second-order) self-model HW-state X90, and for agent B the speed factors of the W-states X66-X77 are represented by the (second-order) self-model HW-state X91. Moreover, for agent A the speed factors of the T-states X78-X83 are represented by the (second-order) self-model HT-state X92, and for agent B the speed factors of the T-states X84-X89 are represented by the (second-order) self-model HT-state X93.
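To make this concrete, the sketch below illustrates the generic update rule of a self-modeling network, X_Y(t + dt) = X_Y(t) + eta_Y (c_Y(...) - X_Y(t)) dt, for a small toy fragment. It is a minimal sketch with hypothetical state names and illustrative connection weights and targets, not the authors' full 93-state specification: the speed factor of a first-order W-state is read from a second-order H-state, and the excitability threshold of a representation state is read from a first-order T-state, analogous to the adaptive entries in Tables B.6 and B.7.

```python
import math

def alogistic(sigma, tau, impacts):
    """Advanced logistic sum combination function alogistic(sigma, tau)."""
    s = sum(impacts)
    return ((1.0 / (1.0 + math.exp(-sigma * (s - tau)))
             - 1.0 / (1.0 + math.exp(sigma * tau))) * (1.0 + math.exp(-sigma * tau)))

# Toy fragment (hypothetical names): a sensing state, a representation state whose
# threshold is a T-state, and a W-state whose speed factor is an H-state.
X = {"sense": 1.0,   # assumed constant stimulus input for this sketch
     "rep": 0.0,     # representation state
     "W": 0.4,       # first-order self-model state for the sense->rep weight (initial 0.4, cf. Table B.7)
     "T": 0.7,       # first-order self-model state for rep's excitability threshold (initial 0.7)
     "H_W": 0.0,     # second-order self-model state: adaptive speed factor of W
     "H_T": 0.0}     # second-order self-model state: adaptive speed factor of T
dt = 0.5

def step(X):
    new = dict(X)
    # Representation state: the incoming weight is the W-state value and the alogistic
    # threshold is the T-state value (cf. parameter entries such as "5, X78" in Table B.6).
    new["rep"] = X["rep"] + 1.0 * (alogistic(5, X["T"], [X["W"] * X["sense"]]) - X["rep"]) * dt
    # First-order adaptation of W (Hebbian-like, illustrative target); its speed factor
    # is the current value of the H_W-state (cf. speed factor entries X90/X91 in Table B.7).
    hebb_target = alogistic(5, 4.5, [X["sense"] * X["rep"], X["W"]])
    new["W"] = X["W"] + X["H_W"] * (hebb_target - X["W"]) * dt
    # First-order adaptation of the threshold T (illustrative target), speed factor H_T.
    t_target = alogistic(5, 0.5, [X["rep"]])
    new["T"] = X["T"] + X["H_T"] * (t_target - X["T"]) * dt
    # The second-order H-states themselves evolve with a fixed speed factor of 1.
    new["H_W"] = X["H_W"] + 1.0 * (alogistic(5, 0.7, [X["rep"]]) - X["H_W"]) * dt
    new["H_T"] = X["H_T"] + 1.0 * (alogistic(5, 0.7, [X["rep"]]) - X["H_T"]) * dt
    return new

for _ in range(100):
    X = step(X)
print({name: round(value, 3) for name, value in X.items()})
```

In this sketch the W-state only starts to change once the H_W-state has become nonzero, which illustrates how the second-order self-model level exerts control over first-order adaptation (the metaplasticity effect discussed in the main text).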
Table B.7. Role matrix ms for speed factors and iv for initial values. For each state, the speed factor and the initial value are listed (in that order); a speed factor entry of the form Xn indicates that the speed factor is adaptive and represented by the second-order self-model state Xn.

X1, X2 (ws_s,A, ws_s,B): 1, 1
X3 (ws_s,AB): 1, 0
X4, X5 (cont_h,A, cont_h,B): 0, 1
X6-X17 (sensing, representation, preparation and cons_emotion states of A): 1, 0
X18-X23 (intrasyncdet and intersyncdet states of A): 0.5, 0
X24-X26 (move_m,A, exp_affect_b,A, talk_A,B,v): 1, 0
X27-X38 (sensing, representation, preparation and cons_emotion states of B): 1, 0
X39-X44 (intrasyncdet and intersyncdet states of B): 0.5, 0
X45-X47 (move_m,B, exp_affect_b,B, talk_B,A,v): 1, 0
X48-X53 (world states ws for transmission between A and B): 1, 0
X54-X59 (Wsense-rep and Wprep-exec states of A): X90, 0.4
X60-X62 (Wexec-ws_m,A,B, Wexec-ws_b,A,B, Wexec-ws_v,A,B): 1, 0
X63-X65 (Wws-sense_m,B,A, Wws-sense_b,B,A, Wws-sense_v,B,A): 0, 1
X66-X71 (Wsense-rep and Wprep-exec states of B): X91, 0.4
X72-X74 (Wexec-ws_m,B,A, Wexec-ws_b,B,A, Wexec-ws_v,B,A): 1, 0
X75-X77 (Wws-sense_m,A,B, Wws-sense_b,A,B, Wws-sense_v,A,B): 0, 1
X78-X83 (T-states of A): X92, 0.7
X84-X89 (T-states of B): X93, 0.7
X90-X93 (HW,A, HW,B, HT,A, HT,B): 1, 0

References

1. M. Accetto, J. Treur and V. Villa, An adaptive cognitive-social model for mirroring and social bonding during synchronous joint action, Procedia Comput. Sci. 145 (2018) 3–12, https://doi.org/10.1016/j.procs.2018.11.002.
2. J. K. Burgoon, L. Dillman and L. A. Stern, Adaptation in dyadic interaction: Defining and operationalizing patterns of reciprocity and compensation, Commun. Theory 3(4) (1993) 295–316.
3. J. K. Burgoon, L. A. Stern and L. Dillman, Interpersonal Adaptation: Dyadic Interaction Patterns (Cambridge University Press, 1995), https://doi-org.vu-nl.idm.oclc.org/10.1017/CBO9780511720314.
4. J. N. Cappella, Mutual influence in expressive behavior: Adult–adult and infant–adult dyadic interaction, Psychol. Bull. 89(1) (1981) 101.
5. G. Dumas, J. Nadel, R. Soussignan, J. Martinerie and L. Garnero, Inter-brain synchronization during social interaction, PLoS One 5(8) (2010) e12166.
6. M. J. Hove and J. L. Risen, It's all in the timing: Interpersonal synchrony increases affiliation, Soc. Cogn. 27(6) (2009) 949–961.
7. S. L. Koole, W. Tschacher, E. Butler, S. Dikker and T. F. Wilderjans, In sync with your shrink, in Applications of Social Psychology, eds. J. P. Forgas, W. D. Crano and K. Fiedler (Taylor and Francis, Milton Park, 2020), pp. 161–184.
8. F. Ramseyer and W. Tschacher, Nonverbal synchrony in psychotherapy: Coordinated body movement reflects relationship quality and outcome, J. Consult. Clin. Psychol. 79 (2011) 284–295, doi: 10.1037/a0023419.
9. B. Tarr, J. Launay and R. I. M. Dunbar, Silent disco: Dancing in synchrony leads to elevated pain thresholds and social closeness, Evol. Hum. Behav. 37(5) (2016) 343–349.
10. S. S. Wiltermuth and C. Heath, Synchrony and cooperation, Psychol. Sci. 20(1) (2009) 1–5.
11. E. Ferrer and J. L. Helm, Dynamical systems modeling of physiological coregulation in dyadic interactions, Int. J. Psychophysiol. 88(3) (2013) 296–308.
12. R. M. Warner, Cyclicity of vocal activity increases during conversation: Support for a nonlinear systems model of dyadic social interaction, Behav. Sci. 37(2) (1992) 128–138.
13. W. Tschacher, F. Ramseyer and S. L. Koole, Sharing the now in the social present: Duration of nonverbal synchrony is linked with personality, J. Pers. 86(2) (2018) 129–138.
14. S. L. Koole and W. Tschacher, Synchrony in psychotherapy: A review and an integrative framework for the therapeutic alliance, Front. Psychol. 7 (2016) 862.
15. N. Chandra and E. Barkai, A non-synaptic mechanism of complex learning: Modulation of intrinsic neuronal excitability, Neurobiol. Learn. Mem. 154 (2018) 30–36.
16. D. Debanne, Y. Inglebert and M. Russier, Plasticity of intrinsic neuronal excitability, Curr. Opin. Neurobiol. 54 (2019) 73–82.
17. A. H. Williams, T. O'Leary and E. Marder, Homeostatic regulation of neuronal excitability, Scholarpedia 8 (2013) 1656.
18. A. Zhang, X. Li, Y. Gao and Y. Niu, Event-driven intrinsic plasticity for spiking convolutional neural networks, IEEE Trans. Neural Netw. Learn. Syst. (2021), doi: 10.1109/tnnls.2021.3084955.
19. M. F. Bear and R. C. Malenka, Synaptic plasticity: LTP and LTD, Curr. Opin. Neurobiol. 4(3) (1994) 389–399.
20. D. O. Hebb, The Organization of Behavior: A Neuropsychological Theory (John Wiley and Sons, New York, 1949).
21. C. J. Shatz, The developing brain, Sci. Am. 267 (1992) 60–67.
22. P. K. Stanton, LTD, LTP, and the sliding threshold for long-term synaptic plasticity, Hippocampus 6(1) (1996) 35–42.
23. W. C. Abraham and M. F. Bear, Metaplasticity: The plasticity of synaptic plasticity, Trends Neurosci. 19(4) (1996) 126–130.
24. B. L. Robinson, N. S. Harper and D. McAlpine, Meta-adaptation in the auditory midbrain under cortical influence, Nat. Commun. 7 (2016) 13442.
25. D. L. Trout and H. M. Rosenfeld, The effect of postural lean and body congruence on the judgment of psychotherapeutic rapport, J. Nonverbal Behav. 4 (1980) 176–190.
26. R. E. Maurer and J. H. Tindall, Effect of postural congruence on client's perception of counselor empathy, J. Couns. Psychol. 30(2) (1983) 158–163, doi: 10.1037/0022-0167.30.2.158.
27. C. F. Sharpley, J. Halat, T. Rabinowicz, B. Weiland and J. Stafford, Standard posture, postural mirroring and client-perceived rapport, Couns. Psychol. Q. 14 (2001) 267–280, doi: 10.1080/09515070110088843.
28. R. Feldman, Parent–infant synchrony: Biological foundations and developmental outcomes, Curr. Dir. Psychol. Sci. 16 (2007) 340–345, doi: 10.1111/j.1467-8721.2007.00532.x.
29. C. Tichelaar and J. Treur, Network-oriented modeling of the interaction of adaptive joint decision making, bonding and mirroring, in Proc. 7th Int. Conf. Theory and Practice of Natural Computing, TPNC'18, Lecture Notes in Computer Science, Vol. 11324 (Springer Nature, Cham, 2018), pp. 328–343.
30. H. B. Laws, A. G. Sayer, P. R. Pietromonaco and S. I. Powers, Longitudinal changes in spouses' HPA responses: Convergence in cortisol patterns during the early years of marriage, Health Psychol. 34(11) (2015) 1076.
31. N. Boot, M. Baas, S. V. Gaal, R. Cools and C. K. W. D. Dreu, Creative cognition and dopaminergic modulation of fronto-striatal networks: Integrative review and research agenda, Neurosci. Biobehav. Rev. 78 (2017) 13–23.
32. J. Lisman, K. Cooper, M. Sehgal and A. J. Silva, Memory formation depends on both synapse-specific modifications of synaptic strength and cell-specific increases in excitability, Nat. Neurosci. 21 (2018) 309–314.
33. J. Treur, Temporal factorisation: A unifying principle for dynamics of the world and of mental states, Cogn. Syst. Res. 8(2) (2007) 57–74.
34. J. Treur, Temporal factorisation: Realisation of mediating state properties for dynamics, Cogn. Syst. Res. 8(2) (2007) 75–88.
35. P. U. Tse, The Neural Basis of Free Will: Criterial Causation (MIT Press, Cambridge, 2013).
36. J. Treur, Modeling the emergence of informational content by adaptive networks for temporal factorisation and criterial causation, Cogn. Syst. Res. 68 (2021) 34–52.
37. S. C. F. Hendrikse, J. Treur, T. F. Wilderjans, S. Dikker and S. L. Koole, On becoming in sync with yourself and others: An adaptive agent model for how persons connect by detecting intra- and interpersonal synchrony, Hum.-Centric Intell. Syst. J. 3 (2023) 123–146, https://www.springer.com/journal/44230 [In sync with yourself and with others: Detection of intra- and interpersonal synchrony within an adaptive agent model, in Face2face: Advancing the Science of Social Interaction (Royal Society, London), https://www.researchgate.net/publication/358964043].
38. J. Treur, Network-Oriented Modeling: Addressing Complexity of Cognitive, Affective and Social Interactions (Springer Nature, 2016).
39. J. Treur, Network-Oriented Modeling for Adaptive Networks: Designing Higher-Order Adaptive Biological, Mental and Social Network Models (Springer Nature, 2020).
40. J. Treur, Modeling multi-order adaptive processes by self-modeling networks (Keynote speech), in Proc. 2nd Int. Conf. Machine Learning and Intelligent Systems, MLIS'20, eds. A. J. Tallón-Ballesteros and C.-H. Chen, Frontiers in Artificial Intelligence and Applications, Vol. 332 (IOS Press, 2020), pp. 206–217.
41. A. R. Damasio, The Feeling of What Happens: Body and Emotion in the Making of Consciousness (Houghton Mifflin Harcourt, 1999).
42. G. Hesslow, Conscious thought as simulation of behaviour and perception, Trends Cogn. Sci. 6 (2002) 242–247.
43. D. Grandjean, D. Sander and K. R. Scherer, Conscious emotional experience emerges as a function of multilevel, appraisal-driven response synchronization, Conscious. Cogn. 17(2) (2008) 484–495.
44. W. R. Ashby, Design for a Brain, 2nd extended edn. (Chapman and Hall, London, 1960).
45. R. F. Port and T. V. Gelder, Mind as Motion: Explorations in the Dynamics of Cognition (MIT Press, Cambridge, MA, 1995).
46. J. Zweerings, K. Sarasjärvi, K. A. Mathiak, J. Iglesias-Fuster, F. Cong, M. Zvyagintsev and K. Mathiak, Data-driven approach to the analysis of real-time fMRI neurofeedback data: Disorder-specific brain synchrony in PTSD, Int. J. Neural Syst. 31(11) (2021) 2150043.
47. A. Olamat, P. Ozel and A. Akan, Synchronization analysis in epileptic EEG signals via state transfer networks based on visibility graph technique, Int. J. Neural Syst. 32(2) (2022) 2150041.
48. G. Liu, L. Tian and W. Zhou, Patient-independent seizure detection based on channel-perturbation convolutional neural network and bidirectional long short-term memory, Int. J. Neural Syst. 32(6) (2022) 2150051.
49. M. Ahmadlou and H. Adeli, Fuzzy synchronization likelihood with application to attention-deficit/hyperactivity disorder, Clin. EEG Neurosci. 42(1) (2011) 6–13.
50. M. Ahmadlou and H. Adeli, Visibility graph similarity: A new measure of generalized synchronization in coupled dynamic systems, Phys. D, Nonlinear Phenom. 241(4) (2012) 326–332.
51. M. Ahmadlou, H. Adeli and A. Adeli, Fuzzy synchronization likelihood-wavelet methodology for diagnosis of autism spectrum disorder, J. Neurosci. Methods 211(2) (2012) 203–209.
52. S. C. F. Hendrikse, J. Treur, T. F. Wilderjans, S. Dikker and S. L. Koole, On the same wavelengths: Emergence of multiple synchronies among multiple agents, in Proc. 22nd Int. Workshop on Multi-Agent-Based Simulation, MABS'21, Lecture Notes in Computer Science, Vol. 13128 (Springer, Cham, 2022), pp. 57–71.
53. S. C. F. Hendrikse, S. Kluiver, J. Treur, T. F. Wilderjans, S. Dikker and S. L. Koole, How virtual agents can learn to synchronize: An adaptive joint decision-making model of psychotherapy, Cogn. Syst. Res. 79 (2023) 138–155.
54. M. McPherson, L. Smith-Lovin and J. M. Cook, Birds of a feather: Homophily in social networks, Annu. Rev. Sociol. 27(1) (2001) 415–444.
55. J. Treur and L. van Ments (eds.), Mental Models and their Dynamics, Adaptation, and Control: A Self-Modeling Network Modeling Approach (Springer Nature, 2022).
56. G. Canbaloğlu, J. Treur and A. Wiewiora (eds.), Computational Modeling of Multilevel Organisational Learning and its Control Using Self-Modeling Network Models (Springer Nature, 2023).
57. B. Biancardi, S. Dermouche and C. Pelachaud, Adaptation mechanisms in human–agent interaction: Effects on user's impressions and engagement, Front. Comput. Sci. 3 (2021) 696682.
58. J. Treur, Does this suit me? Validation of self-modeling network models by parameter tuning, in Mental Models and their Dynamics, Adaptation, and Control: A Self-Modeling Network Modeling Approach, eds. J. Treur and L. V. Ments, Chap. 19 (Springer Nature, 2022), pp. 537–565.
59. S. C. F. Hendrikse, J. Treur, T. F. Wilderjans, S. Dikker and S. L. Koole, On the interplay of interpersonal synchrony, short-term affiliation and long-term bonding: A second-order multi-adaptive neural agent model, in Proc. 18th Int. Conf. Artificial Intelligence Applications and Innovations, AIAI'22, eds. I. Maglogiannis et al., Advances in Information and Communication Technology, Vol. 646 (Springer Nature, 2022), pp. 37–57.