Emotional Expression Humanoid Robot WE-4RII
-Evaluation of the perception of facial emotional expressions by
using fMRI-
M. Zecca1,2, T. Chaminade3, M.A. Umiltà4, K. Itoh2,5, M. Saito6, N. Endo6,
Y. Mizoguchi6, S. Blakemore3, C. Frith3, V. Gallese4, G. Rizzolatti4, S. Micera7,
P. Dario2,7, H. Takanobu2,8,9, A. Takanishi1,2,5,9,10
1. Consolidated Research Institute for Advanced Science and Medical Care, Waseda University, Tokyo,
Japan, email: zecca@aoni.waseda.jp; takanisi@waseda.jp
2. RoboCasa, Waseda University, Tokyo, Japan
3. Wellcome Department of Imaging Neuroscience, University College London
4. Dipartimento di Neuroscienze, Sezione di Fisiologia, Università di Parma
5. Department of Mechanical Engineering, Waseda University, Tokyo, Japan
6. Graduate School of Science and Engineering, Waseda University, Tokyo, Japan
7. ARTS Lab, Scuola Superiore Sant’Anna, Pisa, Italy
8. Department of Mechanical Systems Engineering, Kogakuin University, Tokyo, Japan
9. Humanoid Robotics Institute (HRI), Waseda University, Tokyo, Japan
10. Advanced Research Institute for Science and Engineering, Waseda University, Tokyo, Japan
Personal robots and robot technology (RT)-based assistive devices are expected to play a major role in our elderly-dominated society, actively participating in joint work and community life with humans. In order to achieve this smooth and natural integration between humans and robots, interaction at the emotional level is also a fundamental requirement.
The objective of this research, therefore, is to clarify how the emotions expressed by a humanoid robot are perceived by humans. The preliminary results show several similarities, but also several differences, in perception.
Key Words: facial emotional expression, humanoid robot, fMRI
1 Introduction
Japan has the world's highest percentage of senior citizens
over 65 (21%) and the smallest percentage of children under
15 (13.6%) [1]. These figures show that Japanese society is
aging much faster than expected, and they underscore the
effects of a shrinking birthrate [2]. In this aging society, it is
expected that there will be a growing need for home, medical
and nursing care services, including those provided by robots,
to assist the elderly on both the physical and the psychological levels [3]. In this regard, human-robot
communication and interaction are very important,
particularly in the case of home and personal assistance for
elderly and/or handicapped people. If a robot had a "mind"
(intelligence, emotion, and will) similar to the human one, it
would be much easier for the robot to achieve smooth and
natural adaptation and interaction with its human partners
and the environment [4].
Takanishi et al. have been developing the WE-3 (Waseda Eye No. 3) series since 1995. So far they have achieved coordinated head-eye motion with V.O.R. (Vestibulo-Ocular Reflex), depth perception using the angle of convergence between the two eyes, adjustment to the brightness of an object with the eyelids, and four senses: visual, auditory, cutaneous and olfactory. In addition, they obtained the expression of emotions by using not only the face, but also the upper half of the body, with the Emotion Expression Humanoid Robot WE-4 (Waseda Eye No. 4) series, which includes the waist, the 9-DOF emotion expression humanoid arms and the humanoid robot hands RCH-1 (RoboCasa Hand No. 1) [5-7].
The transmission of emotions by WE-4RII was evaluated by showing movies of its six basic emotional expressions to many subjects, who chose the emotion they thought the robot expressed. The average recognition rate over all emotional expressions of WE-4RII was 93.5%, which proved that WE-4RII can effectively convey its emotions using its upper-body expressions [7].
However, this kind of analysis lacks objectivity. In order to obtain more objective data on how users perceive the emotions, a different approach should be pursued.
The mirror neuron system [8] is an area of our brain whose
neurons fire both when we perform an action and when we
observe the same action performed by someone else. The
function of the mirror system is a subject of much
speculation. These neurons may be important for
understanding the actions of other people, and for learning
new skills by imitation. It is also considered that the Mirror
Neuron System plays an important role in the recognition of
emotions.
The objective of this research, therefore, is to clarify how the emotions expressed by a humanoid robot are perceived by humans.
2 Material and Methods
2.1 Emotion Expression Humanoid Robot WE-4RII
The Emotion Expression Humanoid Robot WE-4RII (see Fig. 1), developed in the Takanishi laboratory, is capable of expressing 6 different emotions (Happiness, Anger, Surprise, Sadness, Disgust, Fear) by using facial expressions and movements of the neck, the arms and the hands [7].
Fig. 1 The Emotion Expression Humanoid Robot WE-4RII.
Fig. 2 presents the details of the mechanisms used to obtain the different facial expressions. The eyebrows are made of sponge; each eyebrow is actuated by 4 DC motors connected by clear wires. The lips are obtained with 2 spindle-shaped springs, whose movement is realized by 4 DC motors. The eyelids have 6 DOFs.
Fig. 2 Details of the mechanisms for facial expressions.
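As an illustration only, the actuation figures given above can be gathered into a small data structure. The following Python sketch merely restates what the text says (sponge eyebrows with 4 DC motors each, spring lips driven by 4 DC motors, 6-DOF eyelids); it is not WE-4RII's actual control software, and all names are hypothetical.

```python
# Illustrative data-structure sketch of the facial actuation described above.
# It only restates the paper's figures; it is not the robot's control code.
from dataclasses import dataclass

@dataclass
class FacialUnit:
    name: str
    mechanism: str
    actuation: str

WE4RII_FACE = [
    FacialUnit("eyebrows", "sponge, pulled by clear wires", "4 DC motors per eyebrow"),
    FacialUnit("lips", "two spindle-shaped springs", "4 DC motors"),
    FacialUnit("eyelids", "see Fig. 2", "6 DOFs"),
]

for unit in WE4RII_FACE:
    print(f"{unit.name}: {unit.mechanism} ({unit.actuation})")
```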
2.2 Experimental Paradigm
The experimental paradigm contained 16 conditions defined by a 2x2x4 factorial design with the following factors (enumerated in the sketch below):
2 agents: human or humanoid robot WE-4RII;
2 identities of each agent: bald or hairy version;
4 facial motions depicted by the agent: silent speech (articulating a syllable, e.g. "ba ba") or emotion (happiness, anger or disgust).
Silent speech was selected because it is supposed to activate
the motor areas while not activating the emotional area of the
Mirror Neuron System.
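As a simple illustration of the 2x2x4 design, the following Python sketch enumerates the 16 conditions. The factor levels are the ones listed above; the variable names are only illustrative and not part of the original experimental software.

```python
# Sketch: enumerate the 16 conditions of the 2x2x4 factorial design.
from itertools import product

agents = ["human", "robot"]                            # factor 1: agent
identities = ["bald", "hairy"]                         # factor 2: identity of each agent
motions = ["speech", "happiness", "anger", "disgust"]  # factor 3: facial motion

conditions = list(product(agents, identities, motions))
assert len(conditions) == 16                           # 2 x 2 x 4 = 16 conditions

for agent, identity, motion in conditions:
    print(f"{agent}/{identity}/{motion}")
```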
Each stimulus consisted of a 1.5-second greyscale video clip (i.e. 38 frames at 25 frames per second). An example of Happiness for one human actor and for the robot with the wig is presented in Figure 3.
Fig. 3 Example of Happiness for one human actor (top) and for the robot WE-4RII with the wig (bottom)
Two different actors were recorded to prepare the human
stimuli while two versions of the robotic face were prepared
by the addition of a wig (Figure 4).
Fig. 4 2x2 Factorial Plan
Four different versions of each type of stimulus were used (see Fig. 5 for Happiness). All stimuli started from a neutral pose and ended with the emotional expression. Great care was taken to match the dynamics of the human and robot stimuli pairwise as much as possible, in order to minimize false responses. The greyscale was digitally modified to match the background colour and the overall contrast between the human and robot stimuli. The overall luminosity of the clips was reduced to avoid excessive visual fatigue for the subjects in the fMRI scanner.
Fig. 5: four different samples for Happiness.
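The exact image-processing pipeline is not described here. The following Python sketch shows one standard way such greyscale matching and dimming could be done, assuming the frames are available as 8-bit greyscale NumPy arrays; the function names and the dimming factor are assumptions for illustration, not the authors' actual procedure.

```python
# Hypothetical sketch of greyscale matching; not the authors' actual pipeline.
import numpy as np

def match_mean_and_contrast(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Rescale `frame` so that its mean and standard deviation match `reference`."""
    f = frame.astype(np.float64)
    r = reference.astype(np.float64)
    matched = (f - f.mean()) / (f.std() + 1e-9) * r.std() + r.mean()
    return np.clip(matched, 0, 255).astype(np.uint8)

def dim(frame: np.ndarray, factor: float = 0.8) -> np.ndarray:
    """Reduce overall luminosity, e.g. to limit visual fatigue in the scanner."""
    return np.clip(frame.astype(np.float64) * factor, 0, 255).astype(np.uint8)
```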
2.3 Behavioral Analysis
Participants were asked to recognize the emotional content of the stimuli. After presentation of each emotional stimulus, subjects had to choose between 4 options in a forced-choice paradigm (Angry, Happy, Disgust, Neutral). There were 8 blocks of 8 stimuli for each participant reported here, and each stimulus was shown once. All subjects had experienced 1 to 4 experimental sessions before the one used for the present analysis, ensuring that they were all familiar with the robot and the stimuli at the time of acquisition of the behavioral data reported here.
2.4 fMRI acquisition
The fMRI acquisition consisted of 4 sessions, each composed of 8 blocks (4 with the emotional task, 4 with the neutral task), with 8 stimuli per block (1 of each type).
Each presentation was followed by a 1.5-second response screen, with the stimulus-onset asynchrony (SOA) jittered (normal distribution, 6 ± 0.7 seconds) and divided before and after the stimulus.
The response screen contained a reminder of the task and a continuous Likert scale [-200, 200], anchored by the target emotion and "None" for the emotional task, and by "Lots" and "None" for the neutral task, with the side of the anchors randomized.
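As a hedged illustration of the timing parameters quoted above (1.5 s clip, 1.5 s response screen, SOA drawn from a 6 ± 0.7 s normal distribution), one possible way to generate such a trial timeline is sketched below. The split of the jitter into pre- and post-stimulus intervals is an assumed interpretation of "divided before and after the stimulus", and the function name is illustrative.

```python
# Sketch of jittered trial timing (assumed interpretation of the text).
import numpy as np

rng = np.random.default_rng(0)

def trial_timing(clip_s: float = 1.5, response_s: float = 1.5,
                 soa_mean: float = 6.0, soa_sd: float = 0.7):
    """Return (pre_stimulus, clip, response, post_stimulus) durations in seconds."""
    soa = rng.normal(soa_mean, soa_sd)
    rest = max(soa - clip_s - response_s, 0.0)   # jitter left around clip + response
    pre = rng.uniform(0.0, rest)                 # portion of the jitter before the stimulus
    post = rest - pre                            # remainder after the response screen
    return pre, clip_s, response_s, post
```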
Whole-brain fMRI was acquired at 1.5 T with an EPI sequence optimized for the amygdala, orbitofrontal and ventrotemporal cortex (48 slices, 3x3x3 mm³ voxels, TR = 4.32 s). The data were analyzed with a standard SPM2 pipeline, including unwarping with field maps. Each subject was scanned for about 1 hour.
Two different types of question were put to the subjects:
1. Emotional: "How emotional was the face?", rated from "Neutral" to the target emotion (e.g. "Happy");
2. Neutral: "How much movement did the face show?", rated from "None" to "Lots".
3 Experimental Results
3.1 Behavioral Analysis
10 subjects participated after giving their informed consent,
independently of or in addition to the fMRI experiment. One
subject did not report any stimulus as "Neutral", and was
removed from the analysis, so that n=9 for the results
reported here.
Analysis of variance (factors of interest: Emotion, Agent; random factors: version of the agent [bald vs. hairy] and subject) indicated a significant main effect of the agent used to display the emotion on the ratio of correct answers (number of correct answers divided by the total number for each condition), but no significant effect of the emotion on recognition and no interaction between emotion and agent.
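The statistics reported here were computed by the authors. As a sketch only, the ratio of correct answers and the human-vs-robot comparison could be reproduced from trial-level data roughly as follows, assuming a pandas DataFrame with columns subject, agent, emotion and correct (these column names are assumptions); the full mixed-design ANOVA with random factors is not reproduced.

```python
# Sketch: per-subject recognition ratios and a paired human-vs-robot comparison.
# Assumes `trials` has columns: subject, agent ("human"/"robot"), emotion, correct (0/1).
import pandas as pd
from scipy import stats

def recognition_ratio(trials: pd.DataFrame) -> pd.DataFrame:
    """Number of correct answers divided by total number, per subject and agent."""
    return (trials.groupby(["subject", "agent"])["correct"]
                  .mean()
                  .unstack("agent"))

def human_vs_robot(trials: pd.DataFrame):
    ratios = recognition_ratio(trials)
    return stats.ttest_rel(ratios["human"], ratios["robot"])  # paired t-test
```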
Results of the behavioral analysis are presented in Table 1
and Figure 6. The difference between human and robot
agents is highly significant (t=4.512, p<0.001), with
emotions being better recognized for the human (98%) than
for the robot (85%) agents.
Table 1. Recognition ratio for each agent and emotion.
Fig. 6. Recognition ratio for each agent and emotion.
3.2 fMRI analysis – behavioral analysis
13 subjects (male: 4; female: 9), all right-handed, average age 29.4 ± 7 years (range 22.4 – 39.7), gave their informed consent to participate in this experiment.
One-way ANOVAs restricted to each emotion were used to assess differences in ratings due to the agent used to depict the emotion. Only the emotional rating of the Angry stimuli was significantly different (p < 0.001).
The human was rated as more emotional, and perceived as moving more, than the robot across all conditions; this could be either a subjective effect or a consequence of the stimuli not being perfectly matched.
Since only the emotional ratings of the Angry stimuli are significantly affected by the agent (see Figs. 7 and 8), the Angry stimuli will be excluded from the fMRI analysis.
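As an illustrative sketch of these one-way ANOVAs (one test per emotion, with agent as the only factor), the following assumes a ratings DataFrame with columns emotion, agent and rating; the column and function names are assumptions, not the authors' actual analysis script.

```python
# Sketch: one-way ANOVA per emotion, testing the effect of agent on ratings.
import pandas as pd
from scipy import stats

def per_emotion_agent_anova(ratings: pd.DataFrame) -> dict:
    """Return a p-value for the effect of agent on the rating of each emotion."""
    p_values = {}
    for emotion, group in ratings.groupby("emotion"):
        samples = [g["rating"].to_numpy() for _, g in group.groupby("agent")]
        _, p = stats.f_oneway(*samples)
        p_values[emotion] = p
    return p_values
```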
Fig. 7. Emotion ratings (error bars: SD) depending on agent and emotion. The number on top gives the effect of agent according to the ANOVA for each emotion and rating.
Fig. 8. Motion ratings (error bars: SD) depending on agent and emotion. The number on top gives the effect of agent according to the ANOVA for each emotion and rating.
3.3 fMRI analysis – further analysis
At the time of writing, further analysis of the data is still in progress. The results will be published in a subsequent paper.
4 Discussion and Conclusion
The objective of this work was to clarify how the emotions expressed by a humanoid robot are perceived by humans. To do this, we analyzed the response of the Mirror Neuron System of several subjects looking at videos of the robot and of human actors, and we compared the responses.
A purely behavioral analysis showed that the difference between human and robot agents is highly significant, with emotions being better recognized for the human (98%) than for the robot (85%) agent (see Fig. 6 and Table 1).
The analysis of the fMRI data is still preliminary. However, some simple conclusions can already be drawn. In one-way ANOVAs restricted to each emotion, only the emotional rating of the Angry stimuli was significantly different between agents (p < 0.001).
In the future, the analysis will be extended to different groups of subjects, in order to clarify the dependencies on age, sex and cultural background. The analysis will also be extended to full-body emotional expressions, which are believed to be even more important than facial expressions [9] (e.g., while a fearful face signals a threat, it does not provide information about either the source of the threat or the best way to deal with it; by contrast, a fearful body posture signals a threat and at the same time specifies the action undertaken by the individual fearing for their safety).
ACKNOWLEDGMENT
Part of this research was conducted at the Humanoid Robotics
Institute (HRI), Waseda University. The authors would like to
express thanks to Okino Industries LTD, OSADA ELECTRIC CO.,
LTD, SHARP CORPORATION, Sony Corporation, Tomy Company
LTD and ZMP INC. for their financial support for HRI. The authors would also like to thank the Italian Ministry of Foreign Affairs, General Directorate for Cultural Promotion and Cooperation, for its support in the establishment of the ROBOCASA laboratory. In
addition, this research was supported by a Grant-in-Aid for the
WABOT-HOUSE Project by Gifu Prefecture. Part of the research
has been supported by the EU FET NEUROBOTICS
FP6-IST-001917 “The fusion of Neuroscience and Robotics”.
Finally, the authors would also like to express thanks to ARTS Lab,
NTT Docomo, SolidWorks Corp., Consolidated Research Institute
for Advanced Science and Medical Care, Waseda University,
Advanced Research Institute for Science and Engineering, Waseda
University, Prof. Yutaka Kimura, Dr. Yuichiro Nagano and Dr.
Naoko Yoshida for their support for our research.
References
[1] Ministry of Internal Affairs and Communications of Japan, Preliminary
Report on the 2005 National Census, June 2006.
[2] National Institute of Population and Social Security Research, Population Statistics of Japan 2003, Technical Report, Japan, 2003.
[3] Japan Robot Association, Summary Report on Technology Strategy for Creating a Robot Society in the 21st Century, May 2001.
[4] S. Hashimoto, “KANSEI as the Third Target of Information Processing
and Related Topics in Japan.” Proceedings of the International
Workshop on KANSEI: The Technology of Emotion, pp.101-104.,
1997.
[5] H. Miwa, K. Itoh, et al.: “Design and Control of 9-DOFs Emotion
Expression Humanoid Arm”, Proceedings of the 2004 IEEE
International Conference on Robotics and Automation, pp.128-133,
2004.
[6] H. Miwa, K. Itoh, et al.: “Effective Emotional Expressions with
Emotion Expression Humanoid Robot WE-4RII”, Proceedings of the
2004 IEEE/RSJ International Conference on Intelligent Robots and
Systems, pp.2203-2208, 2004.
[7] M. Zecca, A. Takanishi, et al, “On the development of the Emotion
Expression Humanoid Robot WE-4RII with RCH-1”, HUMANOIDS
2004, v1, Page(s):235 - 252.
[8] G. Rizzolatti, L. Craighero, “The Mirror-Neuron System”, Annual Review of Neuroscience, vol. 27, pp. 169-192, July 2004.
[9] B. De Gelder, “Towards the neurobiology of emotional body language”, Nature Reviews Neuroscience, vol. 7, pp. 242-249, 2006.