Figure 6 - uploaded by Sung Park
(a) Negative and (b) positive expressions of the virtual agent.

The virtual agent expressed emotion through three channels: facial expression, behavioral gestures, and voice. The agent expressed two emotions, positive and negative, on the valence dimension of Russell's dimensional model of affect [46]. The corresponding facial expression was designed based on the ...

Source publication
Article
Full-text available
Artificial entities, such as virtual agents, have become more pervasive. Their long-term presence among humans requires the virtual agent's ability to express appropriate emotions to elicit the necessary empathy from the users. Affective empathy involves behavioral mimicry, a synchronized co-movement between dyadic pairs. However, the characteristi...

Contexts in source publication

Context 1
... virtual agent was a three-dimensional female character that was refined for the experiment (see Fig. 6). We used the animation software Maya 2018 (Autodesk) to modify an open-source, FBX (Filmbox) formatted virtual agent model. Specifically, we adjusted the number and location of the cheekbone and chin to express the facial expressions according to the experiment design (i.e., negative and positive emotion expressions). We used the ...
Context 2
... experiment program using C# 4.5, an object-oriented programming language [62]. The behavioral gestures were designed based on previous studies on the perceived intentions and emotions of gestures [63][64][65]. For example, in the negative emotion condition, the palms faced inward and the arms were bent, concealing the chest (see (a) in Fig. 6). In the positive emotion condition, the virtual agent's palms faced upward, with the arms and chest open (see (b) in Fig. 6). We used a voice recording of a female in her 20s, congruent with the appearance of the virtual agent. To make the expression as natural and believable as possible, we guided the voice actor to speak as ...
Context 3
... studies on the perceived intentions and emotions of gestures [63][64][65]. For example, in the negative emotion condition, the palms faced inward and the arms were bent, concealing the chest (see (a) in Fig. 6). In the positive emotion condition, the virtual agent's palms faced upward, with the arms and chest open (see (b) in Fig. 6). We used a voice recording of a female in her 20s, congruent with the appearance of the virtual agent. To make the expression as natural and believable as possible, we guided the voice actor to speak in a manner as consistent as possible with the agent's visual appearance. The tone and manner were congruent with the dialog script. We designed the virtual ...
Context 4
... conducted a one-way ANOVA on the facial muscle intensity but found no significant difference between the empathic capability groups in the negative emotion condition. In the positive emotion condition, however, we found a significant difference in the facial muscle intensity for AU45 (Blink) (p < .05, F = 3.737) (see Fig. 16). A post hoc Games-Howell test revealed significant differences between the low and high empathy groups (p < .01), the mid and high empathy groups (p < .05), and the low and mid empathy groups (p < ...
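As an aside, the one-way ANOVA F statistic reported above is the ratio of the between-group mean square to the within-group mean square. A minimal sketch in Python, using synthetic intensity values (not the study's data) for three hypothetical empathy groups:

```python
# One-way ANOVA F statistic: between-group variance over within-group variance.
# Illustrative only; the group values below are synthetic, not the study's data.

def one_way_anova_f(groups):
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total number of observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares, weighted by group size
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    msb = ssb / (k - 1)                  # between-group mean square
    msw = ssw / (n - k)                  # within-group mean square
    return msb / msw

# Hypothetical AU45 intensities for low/mid/high empathy groups
low, mid, high = [1, 2, 3], [2, 3, 4], [3, 4, 5]
print(one_way_anova_f([low, mid, high]))  # → 3.0
```

The resulting F is compared against an F distribution with (k − 1, n − k) degrees of freedom to obtain the p-value; a post hoc test such as Games-Howell (which does not assume equal group variances) then identifies which pairs of groups differ.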