Performance Related Energy Exchange in Haptic Human-Human Interaction in a Shared Virtual Object Manipulation Task

Daniela Feth, Raphaela Groten, Angelika Peer, Sandra Hirche, Martin Buss
Institute of Automatic Control Engineering, Technische Universität München, Germany
ABSTRACT
In order to enable intuitive physical interaction with autonomous robots as well as in collaborative multi-user virtual reality and teleoperation systems, a deep understanding of human-human haptic interaction is required. In this paper the effect of haptic interaction in single and dyadic conditions is investigated. Furthermore, an energy-based framework suitable for the analysis of the underlying processes is introduced. A pursuit tracking task experiment is performed where a virtual object is manipulated, jointly by two humans and alone. The performance in terms of the root-mean-square tracking error is improved in dyadic compared to individual conditions, even though the virtual object mass is reduced to one half in the latter. Our results indicate that the interacting partners benefit from role distributions which can be associated with different energy flows.
1 INTRODUCTION
As robots gradually become part of our daily life, ways have to be found to enable intuitive physical human-robot interaction. This
is relevant whether the robot is an autonomous assistant, is used
to extend the human work space as in teleoperation, or is a virtual
partner. Interaction is defined as the bidirectional causal exchange
of signals between a human and a robot. Haptic interaction is based
on the exchange of force and velocity signals between the partners,
i.e. involves the human haptic perception and motor system at the
same time. Whenever human and robot interact in a haptic way, the
partners are connected either limb-to-limb (e.g. holding hands) or
indirectly via a physical link (e.g. an object).
Due to this close physical coupling, the partners are able to adapt
their behavior continuously to each other which makes causality
analysis challenging. This explains why a pure replay of recorded
human signals is not successful as shown in [8]. Hence, it is desir-
able to find a model of haptic interaction which can be implemented
on a robot to enable it to adapt to the human behavior and receive
and send the most relevant haptic signals. So far only little is known
about the characteristics of haptic human-human interaction (HHI)
and haptic interaction models [14], [6].
Known studies on haptic interaction describe behavior changes
in partner trials compared to individual trials by performance mea-
sures. Performance is increased when interacting with a partner [1],
[5]. These performance differences motivate research on individual
behavior in interaction in contrast to single task behavior. However,
those performance measures provide no detailed description on the
interaction itself but only on its effects.
Author e-mails: daniela.feth@tum.de, r.groten@tum.de, angelika.peer@tum.de, s.hirche@ieee.org, m.buss@ieee.org
In the literature, there are various attempts to explain this effect, e.g.:
Social facilitation: People tend to try harder to achieve a task
just because there is another person in the room watching
them [11].
Human biomechanical system: During haptic interaction, participants might constantly push and pull against each other such that their muscles are in a prestressed state. This might allow a faster reaction of their motor system and result in a higher accuracy [10].
Lower individual required forces: When manipulating a certain object in a desired way, the necessary overall force remains the same regardless of whether one or two people act on it. In the case of an interacting couple, each partner has to apply less force than a single person to achieve the same performance; hence, the task is physically easier to execute and performance is increased. Reed [9] showed that this explanation does not hold for his task by introducing a condition in which the inertia of the object was halved.
Roles or strategies: Individuals within a dyad focus on specific aspects of the task, which results in a smaller amount of required actions [4]. In [9] such strategies are identified: the authors distinguish between specialized and non-specialized couples in a pointing task. In the specialized case one partner is accelerating and, at the same time, the other one is decelerating the common object. In the non-specialized case both partners act in the same way.
Except for Reed’s discrete role definitions [7], no measures could
be found to actually describe the underlying processes of haptic
interaction. Therefore, a framework to describe the individual
behavior within an interacting dyad is missing.
In this paper we introduce a theoretic framework based on en-
ergy flows as a way to approach this topic. Haptic interaction is
determined by the exchange of velocity and force signals. There-
fore, the energy flow, which considers the applied forces as well
as the velocity, is an appropriate measure to describe the behavior
in such tasks. Additionally, models based on energy flows have
been introduced in telemanipulation for system analysis and con-
trol design, e.g. Port-Hamiltonian systems [12]. No previous work
applies such energy-based models to describe the behavior of hap-
tic human-human interaction in a joint object manipulation task.
Thus, the link between behavioral studies and system-theoretic applications is missing. This paper is intended to help establish such a link.
As already mentioned, few studies compare single person performances and dyadic performances to gain insight into haptic interaction [1], [5]. Those studies use pointing or cyclical movement tasks to study haptic interaction. Haptic tasks involve motion trajectories and the related forces. If two partners carry out the task collaboratively, they have to find a common trajectory for the object or the interaction point of their hands. Haptic interaction allows them to negotiate their common trajectory. Because the planned trajectories in haptic interaction tasks (e.g. dancing, one partner assisting the other with carrying a bulky object) are not directly measurable, we abstract them to a virtual tracking task scenario and, thus, are able to study the negotiation in an experimental design. To the best of our knowledge, pursuit tracking tasks involving a virtual object as a cursor have not been used to study the fundamentals of shared object manipulation. This setup allows us to investigate single trials as well as dyadic haptic interaction trials.
Interaction can be the basis of either collaboration or competition. Here, we focus on collaboration, which involves the negotiation of intentions, consisting of common goals, strategies and action plans [13]. "Interaction entails only acting on someone or something else, collaboration is inherently 'with' others; working (labore) jointly with (co)" [2]. We distinguish between lower level collaboration and higher level collaboration depending on the amount of information concerning action plans and goals exchanged between partners. Because haptic interaction/collaboration is not yet a well-studied subject, we will keep the intentions constant and thus study a lower level of collaboration where the trajectory is given and does not have to be negotiated. In this case some collaboration is still advantageous: the two partners should find an optimal strategy to combine their inputs (forces/positions) to the common scenario.
The goal of this study is to present experimentally gained information on performance in shared object manipulation by comparing single persons and rigidly coupled dyads. A tracking task scenario is introduced as a simplified interactive task. The energy flow framework is applied to the behavioral data.
The paper is structured as follows: After introducing our re-
search questions in section 2 the experimental design is described
followed by a detailed description of the included measurements in
sections 3 and 4. We present results on performance and energy
flows in section 5 and end with a conclusion.
2 HYPOTHESIS & RESEARCH QUESTION
In order to obtain the specifications of a HHI model we analyze a)
the effects of HHI as well as b) aspects of its underlying processes.
In the literature, different human behavior is reported for single and haptic interaction conditions [1], [5]. Based on a performance measure, we analyze the behavior of an interacting couple in comparison to a single person performing the same pursuit tracking task. We expect tracking performance to be better in the partner condition than in the single condition.
With respect to [9], we raise the question whether increased task performance in the partner condition (p) (also called interaction or dyadic trials throughout the remainder of this paper) is due to reduced necessary individual forces. Therefore, we differentiate between two single conditions, one where the required forces are the same as in the dyadic condition (af) and one where they are halved (ah). More details on how this is achieved follow in section 3.
If reduced necessary individual forces do not explain the expected performance increase in dyadic trials, we can assume that a different advantage is taken of the interaction. To describe this advantage, an energy flow framework is introduced in section 4.2.
We strive to explain the experimentally gained data on human inter-
action behavior in shared object manipulation on the basis of energy
exchange between partners.
Figure 1: Interaction with the virtual environment via haptic interfaces in the dyadic condition (human operators 1 and 2 act through their object avatars on the shared virtual object; measured signals: x1, f1 and x2, f2)
Figure 2: Experimental setup consisting of two linear haptic interfaces (linked by the virtual mass, each with a hand knob) and two screens with the graphical representation of the tracking path and the virtual object
3 EXPERIMENT
The following section will introduce details on the task, the exper-
imental setup and the experimental description including partici-
pants, design and procedure. In this experiment participants had to
perform a pursuit tracking task either on their own or in interaction
with a partner. In the latter case the two partners were linked by a
virtual object (see Fig. 1) and thus exchanged haptic signals.
3.1 Experimental Setup
The graphical representation of the path was implemented in C++. The path was visualized as a white line on a screen and participants were asked to follow this path as accurately as possible with a red ball representing a virtual mass as the path was scrolling down the screen with a constant velocity of \dot{z} = 15 mm/s. The overall path length was kept constant, consisting of repeated components such as triangles, curves, straight lines and jumps (see Fig. 2). The order of the path components was randomized between trials to prevent learning effects. One trial took t_{final} = 161 s. The horizontal position of the red ball renders the position of either one haptic interface or both haptic interfaces, depending on the condition. As shown in Fig. 2, the two 1-DOF linear haptic interfaces (designed at our lab) are each equipped with force sensors (Burster, model 8524), wooden hand knobs and linear actuators (Copley Controls Corp., Thrusttube module, motor type 2504). These haptic interfaces are characterized by their high rigidity and force capability.
The control of the linear haptic interfaces is implemented in Mat-
lab/Simulink and executed on the Linux Real-Time Application In-
terface (RTAI). The graphical representation of the path runs on
another computer and communication is realized by a UDP con-
nection in a local area network.
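As an illustration of the communication path described above, the following minimal sketch shows how the current virtual-object position could be sent from the real-time control PC to the graphics PC over UDP. The IP address, port, and message layout are illustrative assumptions; the paper does not document the actual interface.

```cpp
// Minimal sketch (illustrative assumptions, not the authors' implementation):
// send the virtual-object position over UDP to the graphics computer.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);           // UDP socket
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(50000);                         // assumed port of the graphics PC
    inet_pton(AF_INET, "192.168.0.2", &dest.sin_addr);    // assumed LAN address

    double x_vo = 0.0;                                    // current virtual-object position [m]
    sendto(sock, &x_vo, sizeof(x_vo), 0,
           reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
    close(sock);
    return 0;
}
```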
The control is designed to model the mechanical properties of the
virtual object. In Fig. 1 the model of the virtual object is introduced
and the relevant forces and positions are defined. The motion of the
virtual object is in 1 DOF and its dynamics is modelled according
to Newton’s law
f_{sum}(t) = f_1(t) + f_2(t) = m\ddot{x}_{vo}(t) + b\dot{x}_{vo}(t) + k x_{vo}(t)    (1)

where f_{sum} is the sum of the forces applied by the participant(s); m, b and k are the virtual mass, damping and stiffness, respectively; and \ddot{x}_{vo}, \dot{x}_{vo} and x_{vo} are the desired acceleration, velocity and position of the virtual object and, hence, of the linear haptic interfaces. In this experiment b and k are set to zero and only a virtual mass m is implemented. Hence, the transfer function of the virtual model in the Laplace domain simplifies to

G(s) = X_{vo}(s)/F_{sum}(s) = 1/(m s^2)    (2)

and is implemented in the "admittance" block in Fig. 3.

Figure 3: Position-based admittance control of the linear haptic interfaces in the dyadic condition
A low-level PD controller is used to control the actual positions of the haptic interfaces x_1(t) and x_2(t) to the position of the virtual object x_{vo}(t). It compensates for external forces and friction. Taking into account the high-gain position control, it can be assumed that x_{vo}(t) = x_1(t) = x_2(t), and the transfer function of the overall system consisting of the virtual model and the haptic interfaces can be written as

G(s) = X_1(s)/F_{sum}(s) = X_2(s)/F_{sum}(s) = 1/(m s^2).    (3)

This setup allows not only the measurement of the resulting force f_{sum} = f_1 + f_2 but also of the individual forces f_1 and f_2 applied by the participants. This is an important aspect for the experimental data analysis.
When participants performed the tracking task on their own, they were seated in front of one of the haptic interfaces. This means f_2 = 0 for the single conditions.
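To make the admittance dynamics concrete, the following minimal sketch integrates the double-integrator admittance of equation (2) in discrete time with a forward-Euler scheme. It is written in C++ for illustration only; the authors' control runs in Matlab/Simulink on RTAI, and the 1 kHz sample rate and all identifiers are assumptions.

```cpp
// Minimal sketch (illustrative, not the authors' implementation) of the virtual-object
// admittance G(s) = 1/(m s^2) from Eq. (2), integrated with forward Euler.
#include <vector>

struct VirtualObject {
    double m;        // virtual mass [kg]
    double x = 0.0;  // position x_vo [m]
    double v = 0.0;  // velocity [m/s]

    // Advance the virtual object by one control cycle given the two hand forces.
    double step(double f1, double f2, double dt) {
        const double a = (f1 + f2) / m;  // Newton's law with b = k = 0, cf. Eq. (1)
        v += a * dt;                     // integrate acceleration
        x += v * dt;                     // integrate velocity
        return x;                        // desired position for both PD-controlled interfaces
    }
};

int main() {
    VirtualObject obj{20.0};                          // full-mass condition (20 kg)
    const double dt = 0.001;                          // assumed 1 kHz control rate
    const int steps = static_cast<int>(161.0 / dt);   // one trial lasts t_final = 161 s
    std::vector<double> x_vo;
    for (int k = 0; k < steps; ++k) {
        double f1 = 0.0;  // would come from force sensor 1
        double f2 = 0.0;  // would come from force sensor 2 (f2 = 0 in the single conditions)
        x_vo.push_back(obj.step(f1, f2, dt));
    }
    return 0;
}
```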
3.2 Description of Design, Procedure and Participants
In the presented experiment 24 participants took part. Mean age
was 27 years (std. deviation: 2.7 years). The participants were as-
signed to six groups of four people, each including two males and
two females. Otherwise the assignment to groups was random. This round robin design [3] meant that each participant interacted with the other three members of the group. The chosen design resulted in 6 dyadic data sets per group.
With respect to the research question we compare three within-
subject conditions:
1) condition ”with partner” (p)
2) condition ”alone with full mass” (af) and
3) condition ”alone with half mass” (ah),
where the full mass was chosen to be 20 kg.
For each participant two single trials (af and ah) and three hap-
tic interaction trials with different partners (p, e.g. A with B, C and
D) were recorded (repeated measurement). Within the round robin
design, we balanced the order of conditions to control for sequence
effects. To standardize the test situation further we undertook the
following arrangements: participants not taking part in the on-going
trial had to wait outside the laboratory; a wall was placed between
the two participants so they did not gain visual information about
their partners’ movements; participants used their right hand to per-
form the task (all of the participants were right-handed); partici-
pants were not allowed to speak to each other during the experi-
ment; white noise was played on the headphones worn by partici-
pants, so the noise of the moving haptic interface would not distract;
the position (left or right seat) was randomized with the order of experimental conditions and participants; the order of the experimental conditions was randomized.

Figure 4: Mechanical model of a single human operator interacting with the virtual object
In addition to a general instruction at the beginning of the experiment, the participants had a test curve at the beginning of each trial. This curve was not part of the analysis. Participants were informed beforehand about the upcoming condition.
4 MEASURES
After introducing the performance measure, we give details on the
energy-flow framework.
4.1 Performance Measure
In order to analyze the performance in the three different conditions, we evaluate the root-mean-square error between the virtual object position and the reference path:

RMS_x = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (x_{ref,i} - x_{vo,i})^2 }    (4)

where N is the number of samples per trial.
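A minimal sketch of how equation (4) can be evaluated on logged position samples is given below; the function name and the use of C++ vectors are illustrative assumptions, not the authors' analysis code.

```cpp
// RMS tracking error between reference path and virtual-object position, cf. Eq. (4).
#include <cmath>
#include <cstddef>
#include <vector>

double rms_error(const std::vector<double>& x_ref, const std::vector<double>& x_vo) {
    const std::size_t N = x_ref.size();       // samples per trial (x_vo assumed same length)
    double sum_sq = 0.0;
    for (std::size_t i = 0; i < N; ++i) {
        const double e = x_ref[i] - x_vo[i];  // instantaneous tracking error
        sum_sq += e * e;
    }
    return std::sqrt(sum_sq / static_cast<double>(N));
}
```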
4.2 Energy Flow
We would like to gain a deeper understanding of the underlying
processes of haptic interaction to be able to derive an interaction
model. As we consider haptic human-human interaction to be
connected to energy exchange between the human operators we
evaluate the energy flow (i.e. the power) between the different
involved subsystems (human arm/s, virtual object) in our experi-
ment.
A simple mechanical model is used to define and explain the
energy flow between the subsystems. Although there is no haptic
human-human interaction in the single conditions and we do not
evaluate the energy flow in these conditions, we present the me-
chanical model to introduce the basic principle and assumptions
made in this simple scenario. Next, we introduce the more complex
case of the partner condition.
In the single conditions (ah,af) two subsystems are defined: the
human arm and the virtual object (see Fig. 4). The human arm is
described by a mass-spring-damper model that is connected rigidly
to the virtual object. Based on this mechanical model the energy
flow between the subsystems is

P_1(t) = f_1(t)\,\dot{x}_{vo}(t), \quad t \in [0; t_{final}]    (5)

with f_1 the force applied by the human operator on the haptic interface and \dot{x}_{vo} the velocity of the virtual object. The direction of the force and velocity vectors is defined in such a way that energy injected by the human into the virtual object has a negative sign and energy absorbed by the human from the virtual object a positive one.
Figure 5: Mechanical model of the interacting couple (human arm 1 – virtual object – human arm 2)
In this model we neglect friction and assume the virtual object to be ideally rigid because of the high-gain position control. For this reason, and because of energy conservation, all the energy injected/absorbed by the human arm to/from the virtual object results in a change of its kinetic energy dE_{kin}/dt (acceleration/deceleration):

\frac{dE_{kin}(t)}{dt} + P_1(t) = 0, \quad t \in [0; t_{final}]    (6)

with

\frac{dE_{kin}(t)}{dt} = m\,\ddot{x}_{vo}(t)\,\dot{x}_{vo}(t), \quad t \in [0; t_{final}].    (7)

Furthermore, as neither of the participants touches the knobs at the beginning and at the end of the trials and the virtual mass is not moving at these time instants, i.e. \dot{x}_{vo}(t=0) = \dot{x}_{vo}(t=t_{final}) = 0 m/s, the kinetic energy in the system can be considered to be zero at these moments, E_{kin}(t=0) = E_{kin}(t=t_{final}) = 0 J. Thus, according to energy conservation laws, energy once injected by the participant has to be absorbed by her/him at a later instant. Hence, the mean energy flow between the operator and the virtual object over the whole trial is zero:

\bar{P}_1 = \frac{1}{t_{final}} \int_0^{t_{final}} P_1(t)\, dt = 0 J/s.    (8)
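The sketch below illustrates how the instantaneous energy flow of equation (5) and its trial mean of equation (8) could be computed from sampled force and velocity signals; the function names are hypothetical and the code is not the authors' analysis pipeline.

```cpp
// Instantaneous energy flow P_1(t) = f_1(t) * xdot_vo(t), cf. Eq. (5).  With the paper's
// sign convention, P_1 < 0 means the human injects energy into the virtual object.
#include <cstddef>
#include <vector>

std::vector<double> energy_flow(const std::vector<double>& f1,
                                const std::vector<double>& xdot_vo) {
    std::vector<double> P(f1.size());
    for (std::size_t i = 0; i < f1.size(); ++i)
        P[i] = f1[i] * xdot_vo[i];
    return P;
}

// Trial mean of the energy flow, cf. Eq. (8); for a lossless single trial it should be ~0 J/s.
double mean_energy_flow(const std::vector<double>& P) {
    double sum = 0.0;
    for (double p : P) sum += p;
    return sum / static_cast<double>(P.size());
}
```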
In the partner condition (p) the situation is more complex. As
depicted in Fig. 5, here, three subsystems are defined: human arm 1,
virtual object and human arm 2 with the respective energy flows
P_1(t) = f_1(t)\,\dot{x}_{vo}(t), \quad t \in [0; t_{final}]    (9)

and

P_2(t) = f_2(t)\,\dot{x}_{vo}(t), \quad t \in [0; t_{final}].    (10)
This represents a classical 2-port architecture where an energy flow
occurs between the virtual object and each of the human arms.
Again, for reasons of energy conservation, energy injected at any time instant by one partner is either converted to kinetic energy of the object or absorbed by the other partner, and vice versa:

P_1(t) + P_2(t) + \frac{dE_{kin}(t)}{dt} = 0, \quad t \in [0; t_{final}]    (11)

where dE_{kin}(t)/dt is defined in accordance with the single condition (7).
It is assumed that the virtual object is not moving at the beginning and at the end of the trial, E_{kin}(t=0) = E_{kin}(t=t_{final}) = 0 J. Hence, because of energy conservation laws, energy once injected by either of the interaction partners has to be absorbed by either of them. The mean energy flow of the two interacting partners is therefore zero:

\bar{P}_1 + \bar{P}_2 = 0.    (12)

However, this does not imply that the mean energy flows \bar{P}_1, \bar{P}_2 are themselves zero, because from equation (12) it only follows that

\bar{P}_1 = -\bar{P}_2.    (13)
In order to interpret this equation, two cases have to be distinguished:

CASE 1: \bar{P}_1 = -\bar{P}_2 \neq 0
In this case two conclusions are drawn. First, on average one partner is injecting more energy into the virtual object than he/she is absorbing from it (\bar{P} < 0). Second, with respect to the overall trial, the excessive energy that is injected by one of the partners has to be absorbed by the other one, e.g. \bar{P}_1 < 0, \bar{P}_2 > 0. Hence, on average there is an energy flow from one partner to the other via the virtual object.

CASE 2: \bar{P}_1 = \bar{P}_2 = 0
Here, the partners inject on average the same energy into the virtual object as they absorb from it. However, if we consider every time instant and not the mean value, it is still possible that energy injected by one partner is absorbed by the other one.
One further remark is worth mentioning, concerning the energy flow at every time instant. An energy flow from one partner to the other occurs if and only if one of the interaction partners is injecting energy into the system while the other one is absorbing it, i.e.

sgn(P_1(t)) = -sgn(P_2(t))  \Rightarrow  energy flow from one partner to the other via the object.    (14)

In this case, at every time instant the energy flow between the interaction partners via the virtual object equals the smaller one of the energy flows P_1(t) and P_2(t). Otherwise, i.e. if sgn(P_1(t)) = sgn(P_2(t)) or if either of P_1(t) and P_2(t) is zero, there is no energy flow between the interaction partners; their energy flows contribute only to the kinetic energy of the virtual object.
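As a concrete reading of equation (14), the sketch below classifies each sample of the two energy flows and returns the power transferred between the partners via the object; names and data layout are illustrative assumptions rather than the authors' analysis code.

```cpp
// Per-instant energy transfer between the partners, cf. Eq. (14): a transfer occurs only
// when P_1(t) and P_2(t) have opposite signs (one injects while the other absorbs); its
// magnitude then equals the smaller of the two flows.  Otherwise both flows only change
// the kinetic energy of the virtual object and the transferred power is zero.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<double> partner_transfer(const std::vector<double>& P1,
                                     const std::vector<double>& P2) {
    std::vector<double> transfer(P1.size(), 0.0);
    for (std::size_t i = 0; i < P1.size(); ++i) {
        if (P1[i] * P2[i] < 0.0)  // opposite signs
            transfer[i] = std::min(std::abs(P1[i]), std::abs(P2[i]));
    }
    return transfer;
}
```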
In summary, we state the following four important points that are crucial for the analysis of the energy flows:
1) We assume that the virtual object is not moving at the beginning and at the end of the trial, E_{kin}(t=0) = E_{kin}(t=t_{final}) = 0 J.
2) Because of 1) and energy conservation, energy injected into the system representing the virtual object at one time instant has to be released by it at a later instant.
3) The energy injected by one of the partners at one time instant does not necessarily have to be absorbed by the same person at a later instant. An indirect energy flow from one partner to the other can take place via the object.
4) At every time instant, each of the interaction partners can either inject energy into the virtual object (P(t) < 0) or absorb energy from it (P(t) > 0).
Finally, the mass-spring-damper model of the human arm helps to interpret how the energy is injected/absorbed by the human operator. However, it is important to note that a) this simple model of the human arm does not describe the complex processes in the human arm completely and b) we cannot determine the different energies of the human arm explicitly. Hence, we cannot distinguish whether there is an energy flow between the two partners and the virtual object at the human-object interfaces because they pull/push against each other (potential energy) or because one partner is generating energy in his/her muscles while the other one is dissipating it in the muscles' viscosity (dissipative energy). What we can conclude is that an energy flow from/to the virtual object to/from the human arm results in an increase/decrease of either potential energy or dissipative energy.
As we assume HHI to be connected to an energy exchange between the collaborating partners, the mean energy flows \bar{P}_1 and \bar{P}_2 between the three subsystems are analyzed for the partner condition.
5 RESULTS & DISCUSSION
The following results are based on analyses taking into account the mean performance RMS_x and the mean energy flow measures. First, we analyse whether tracking performance differs between partner and single trials, focusing on the effect of the different masses in the single conditions. Next, the energy flows are evaluated in order to gain insight into the underlying processes of HHI.

Figure 6: Performance analysis by RMS_x (mean and standard error)
5.1 Performance
As mentioned in the previous section, participants performed the
task in three different conditions, two of them on their own (af,ah)
and one in haptic interaction with a partner (p).
There is one statistical challenge. Due to the chosen round robin design, our measurement data sets are not independent, because each person interacted in several dyads. For this reason, most of the classic statistical methods are not applicable. To circumvent this problem, we chose a subset of 12 independent data sets by analyzing only two dyads from each group (both mixed-gender).
Performance is increased in the partner condition (mean: 3.10 mm, standard error: 0.09 mm) compared to both single conditions, as depicted in Fig. 6. In the single conditions participants performed with a lower RMS_x when they had to move only half the weight of the virtual mass (mean: 3.94 mm, standard error: 0.18 mm) compared to the full mass (mean: 4.68 mm, standard error: 0.18 mm). A repeated-measures ANOVA showed a significant influence of the factor "tracking condition" on the performance measure (F(2,22) = 30.729; p < 0.001; partial η² = 0.736). Bonferroni-adjusted pairwise comparisons revealed significant differences (p < 0.05) between all three levels (p, af, ah) of the factor.
While it is not surprising that participants showed higher performance in terms of RMS_x when dealing with the lower virtual mass in the single tracking conditions, they performed even better when they interacted with a partner. Our hypothesis that the RMS_x is lower in interaction trials can therefore be confirmed. Performance in the dyadic trials is even better than in the half-mass trials. Thus, the performance difference between single and dyadic tasks has to be due to the interaction rather than the reduction of necessary individual forces, as considered in our research questions. In the following, we attempt to describe this interaction by energy flows.
5.2 Energy Flow between Interacting Partners
The goal of our energy-flow analysis is to determine which of the two cases introduced in the previous section is true for our experiment. Before approaching this, we have to check whether our assumption of a lossless system can be verified and equations (8) and (13) are satisfied (units are J/s):
\bar{P}_1 + \bar{P}_2 (p condition): mean = -1.62e-4; std. deviation = 1.07e-4
\bar{P}_1 (af condition): mean = -5.14e-5; std. deviation = 5.16e-5
\bar{P}_1 (ah condition): mean = -1.72e-5; std. deviation = 1.85e-5

Figure 7: Histogram of \bar{P}_1 and \bar{P}_2 in the p condition and the reference interval (blue lines)

We note that equations (8) and (13) are not exactly satisfied, but we consider the deviations of the mean values from 0 to be caused by measurement errors and uncompensated friction. Thus, we consider our system to be lossless to a good approximation.
This makes it difficult to distinguish between CASE 1 and CASE 2: if \bar{P}_1 and \bar{P}_2 are unequal to 0 (as considered in CASE 1), we cannot tell whether this is caused by the above-mentioned measurement errors (disturbances) or whether CASE 1 is actually true. Hence, it is problematic to falsify CASE 2.
To approach this, we have to assess whether a given value of the individual energy flows \bar{P}_1 and \bar{P}_2 in the partner condition is more probably explained by the disturbance distribution (described here by the above listed mean values and standard deviations) or whether the value belongs to a different population. Not only because it is theoretically appealing that the sources of disturbances are identical in the two single conditions, but also because the mean values do not differ significantly (paired-sample: t(10) = -1.938, p(two-tailed) = 0.081, one pair excluded from the analysis because the energy value in the af condition was more than two standard errors away from the mean), we treat the two single conditions as one sample and compare it with the individual energy flows in the partner condition. Due to the non-independence of our measurement data and the non-Gaussian distribution, we are unaware of any statistical procedure which allows testing for this. Therefore, we decided on the following procedure. We determined the overall minimum and maximum values min_d, max_d of the disturbance distribution and define [min_d, max_d] = [-0.6 mJ/s, 0.005 mJ/s] to be the reference interval. If a value of \bar{P}_1 or \bar{P}_2 in the partner condition lies within this interval, we assume its deviation from 0 to be explained by measurement errors and CASE 2 is given. If the value is outside the interval, we interpret this as an indication for CASE 1. To illustrate this, the histogram of \bar{P}_1 and \bar{P}_2 in the partner condition and the chosen reference interval are presented in Fig. 7. Because 97% of the \bar{P}_1 and \bar{P}_2 values in the partner condition lie outside the reference interval, we conclude that CASE 1 is an appropriate description of our data.
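The classification described above can be summarized in a short sketch that counts how many mean energy flows fall outside the disturbance reference interval; the function name and interval arguments are illustrative assumptions, not the authors' analysis code.

```cpp
// Fraction of dyadic mean energy flows lying outside the disturbance reference interval
// [min_d, max_d]; values outside are taken as evidence for CASE 1, values inside for CASE 2.
#include <cstddef>
#include <vector>

double fraction_outside(const std::vector<double>& mean_flows,  // mean P_1 and P_2 per dyadic trial [J/s]
                        double min_d, double max_d) {
    std::size_t outside = 0;
    for (double p : mean_flows)
        if (p < min_d || p > max_d) ++outside;
    return static_cast<double>(outside) / static_cast<double>(mean_flows.size());
}
```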
This means that, over the whole trial, in each couple one of the partners is injecting more energy into the virtual object than he/she is absorbing, while the other partner is absorbing more energy than he/she is injecting. On average, there is an energy flow from one partner to the other via the virtual object. However, as introduced in section 4, it is not possible to distinguish whether this energy flow between the virtual object and the human arms is measured because the interacting partners push/pull against each other to feel each other or because one partner is generating energy while the other one is dissipating energy, which could be interpreted as a role allocation.
Finally, the data is spread over a large interval, which indicates that haptic interaction is not the same for each couple. We assume that this variation is caused by different behavioral characteristics of the interacting partners.
6 CONCLUSION
To evaluate the effect of haptic interaction on human behavior in a joint pursuit tracking task, we analysed performance in single as well as partner trials. Results are based on three different conditions: with a partner, alone with the same mass as in the interaction trials, and alone with half of the mass. In accordance with [9], [1], we can confirm our hypothesis that performance is increased in the "partner" condition. Thus, we can generalize the performance-related results of pointing tasks and cyclic motions to joint pursuit tracking tasks. Because performance in the "partner" condition was even better than in the "alone-half-mass" condition, it is concluded that the improved task performance in dyadic trials is not only a result of force reduction for the individual; other explanations, as presented in the introduction, have to be considered.
One of these explanations is the existence of different roles of the haptically interacting partners. Based on a mechanical model, energy flows are introduced to approach the challenge of defining different interaction behavior. Furthermore, this framework provides the theoretical background for an energy-based model, e.g. a Port-Hamiltonian system.
An evaluation of the mean energy flows between the interacting partners, \bar{P}_1 and \bar{P}_2, revealed that, on average, there is an asymmetric energy flow between the partners via the virtual object. This is shown by the fact that in each trial of the "partner" condition there is one partner who is on average injecting energy, while the other one is absorbing energy. The cause of the energy flow between the virtual object and the human arms is either the interaction partners pushing/pulling against each other or one partner generating energy while the other partner is dissipating energy. The latter case would be a role allocation where, on average, one partner could be modeled as a source and the other one as a sink. This would be of special interest for the energy-based model we would like to derive.
Furthermore, the energy flows are not constant across all interacting couples but are spread over a large interval. This indicates that haptic interaction varies between different couples. We attribute this to the individual behavioral characteristics of the partners. A model of haptic interaction has to comprise these different types of interacting couples. However, time-series analysis is required to draw further benefit from the energy-flow framework in haptic interaction.
In the future, other explanations for the benefit of interaction in shared object manipulation, in contrast to individual performance, have to be addressed. Out of the explanations offered in the literature and listed in section 1, this paper addressed two. Further studies may investigate the social facilitation factor. We plan to focus on the influence of the human biomechanical system by experimentally manipulating aspects which affect this system.
ACKNOWLEDGEMENTS
This work is supported in part by the German Research Foundation (DFG) within the collaborative research center SFB453 "High-Fidelity Telepresence and Teleaction" and by the ImmerSence project within the 6th Framework Programme of the European Union, FET - Presence Initiative, contract number IST-2006-027141. The authors are solely responsible for the content of this paper; it does not necessarily represent the opinion of the European Community (see also www.immersence.info).
We acknowledge the valuable contributions of Georg Baetz.
REFERENCES
[1] S. Gentry, E. Feron, and R. Murray-Smith. Human-human haptic collaboration in cyclical Fitts' tasks. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2-6 August 2005, Cambridge, MA, USA, pages 3402–3407, 2005.
[2] B. J. Grosz. Collaborative systems. American Association for Artificial Intelligence National Conference on Artificial Intelligence, 2(17):67–85, 1996.
[3] D. A. Kenny. The design and analysis of social-interaction research. Annual Review of Psychology, 47:59–86, 1996.
[4] G. Knoblich and J. S. Jordan. Action coordination in groups and individuals: Learning anticipatory control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29:1006–1016, 2003.
[5] K. Reed, M. Peshkin, J. E. Colgate, and J. Patton. Initial studies in human-robot-human interaction: Fitts' law for two people. In IEEE International Conference on Robotics and Automation (ICRA), 2004.
[6] K. Reed, M. Peshkin, M. J. Hartmann, M. Grabowecky, J. Patton, and P. M. Vishton. Haptically linked dyads: Are two motor-control systems better than one? Psychological Science, 17(5):365–366, 2006.
[7] K. B. Reed. Understanding the haptic interactions of working together. PhD thesis, Northwestern University, 2007.
[8] K. B. Reed, J. Patton, and M. Peshkin. Replicating human-human physical interaction. In IEEE International Conference on Robotics and Automation (ICRA), 2007.
[9] K. B. Reed, M. Peshkin, M. J. Hartmann, J. E. Colgate, and J. Patton. Kinesthetic interaction. In IEEE International Conference on Robotics and Automation (ICRA), New Orleans, April 2004.
[10] D. J. Reinkensmeyer, P. S. Lum, and S. L. Lehman. Human control of a simple two-hand grasp. Biological Cybernetics, 67:553–564, 1992.
[11] B. H. Schmitt, T. Gilovich, N. Goore, and L. Joseph. Mere presence and social facilitation: One more time. Journal of Experimental Social Psychology, 22:242–248, 1986.
[12] C. Secchi, S. Stramigioli, and C. Fantuzzi. Control of Interactive Robotic Interfaces: A Port-Hamiltonian Approach. Springer Tracts in Advanced Robotics. Springer-Verlag, Secaucus, NJ, USA, 2007.
[13] M. Tomasello, M. Carpenter, J. Call, T. Behne, and H. Moll. Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28:675–735, 2005.
[14] Z. Wang, J. Yuan, and M. Buss. Modelling of human haptic skill: A framework and preliminary results. In 17th IFAC World Congress, Seoul, Korea, July 2008.