Fig. 3. Motion capture setup (left) and marker layout (right) used in the motion capture experiments.

Context in source publication

... passive reflective markers are attached to the android's face to serve as the feature points (shown in Fig. 3). The markers are placed at locations on the face where significant feature-point movement occurs when the actuators are displaced. ...

Citations

... The android's facial expression is controlled using 11 actuators. For a detailed explanation of the android and the experimental setup, please refer to [5]. The android's feature-point positions are captured with a motion capture system to gather training data for the forward-kinematics ANN in Section 2.1 (see Fig. 1). ...
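The excerpt describes collecting paired actuator commands and captured feature-point positions as training data for the forward-kinematics network. A minimal sketch of fitting such a model is shown below; the array shapes, marker count, normalization, and network size are illustrative assumptions, not details from the paper, and the placeholder arrays stand in for real capture data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumed shapes, one row per captured frame:
# U: actuator commands (n_frames, 11), normalized to [0, 1] (assumed range).
# P: marker positions, flattened to (n_frames, 3 * n_markers); 17 markers assumed.
rng = np.random.default_rng(0)
U = rng.uniform(0.0, 1.0, size=(500, 11))        # placeholder for logged commands
P = np.tanh(U @ rng.normal(size=(11, 3 * 17)))   # placeholder for mocap positions

# Forward-kinematics model: actuator vector -> feature-point positions.
fk_model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
fk_model.fit(U, P)

# Predict feature-point positions for a new actuator command.
u_new = rng.uniform(0.0, 1.0, size=(1, 11))
p_pred = fk_model.predict(u_new)
```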
Conference Paper
The ability of androids to display facial expressions is a key factor towards more natural human-robot interaction. However, controlling the facial expressions of such robots with elastic facial skin is difficult due to the complexity of modeling the skin deformation. We propose a method to solve the inverse kinematics of android faces in order to control the android's facial expression using target feature points. In our method, we use an artificial neural network to model the forward kinematics and solve the inverse kinematics by minimizing a weighted squared error function. We then implement an inverse kinematics solver and evaluate our method using an actual android.
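To make the abstract's pipeline concrete, the sketch below solves the inverse kinematics by gradient descent on a weighted squared error over a small hand-rolled forward-kinematics network. The architecture, optimizer, error weights, and actuator limits are all assumptions for illustration; the paper's actual network and solver may differ.

```python
import numpy as np

N_ACTUATORS = 11   # from the cited setup
N_FEATURES = 17    # assumed marker count, not stated in the excerpt
HIDDEN = 32        # assumed hidden-layer size

rng = np.random.default_rng(0)

# Forward-kinematics ANN (one tanh hidden layer). The weights would
# normally be trained on motion-capture pairs; random values stand in here.
W1 = rng.normal(scale=0.1, size=(HIDDEN, N_ACTUATORS))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(3 * N_FEATURES, HIDDEN))
b2 = np.zeros(3 * N_FEATURES)

def forward(u):
    """Predict stacked 3-D feature-point positions from actuator vector u."""
    return W2 @ np.tanh(W1 @ u + b1) + b2

def forward_jacobian(u):
    """Analytic Jacobian of the network output w.r.t. the actuator inputs."""
    dh = 1.0 - np.tanh(W1 @ u + b1) ** 2   # derivative of tanh at the hidden layer
    return W2 @ (dh[:, None] * W1)

def solve_ik(p_target, w, lr=0.1, iters=1000):
    """Minimize E(u) = sum_i w_i * (f_i(u) - p*_i)^2 by gradient descent."""
    u = np.zeros(N_ACTUATORS)
    for _ in range(iters):
        r = forward(u) - p_target              # residual to the target points
        u -= lr * 2.0 * forward_jacobian(u).T @ (w * r)
        u = np.clip(u, 0.0, 1.0)               # assumed normalized actuator range
    return u

# Usage: drive the face toward a reachable target feature-point configuration.
weights = np.ones(3 * N_FEATURES)              # per-coordinate error weights
p_star = forward(rng.uniform(0.0, 1.0, N_ACTUATORS))
u_cmd = solve_ik(p_star, weights)
print("residual norm:", np.linalg.norm(forward(u_cmd) - p_star))
```

Weighting the squared error lets the solver prioritize feature points that matter most for a given expression (for example, mouth corners over cheek markers); uniform weights are used above only for simplicity.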