Figure - available from: Frontiers in Psychology
The uncanny valley confirmed: average scores with standard errors on a variety of measures. (A) Human likeness across participants, (B) uncanniness, (C) comfortability with robotic applications, (D) emotion recognition performance, and (E) average pupil size between 1 and 3 s after stimulus onset per robot character. The robot characters are ordered by scores on human likeness and the last character (outmost right) shows the average score pooled across all human characters. The most relevant statistical comparisons between robots are indicated with asterisks (∗p < 0.05, ∗∗p < 0.01, ∗∗∗p < 0.001).

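For concreteness, the panel (E) measure (average pupil size between 1 and 3 s after stimulus onset) can be computed from a pupil trace roughly as follows; the trace and the 250 Hz sampling rate here are hypothetical illustrations, not the study's actual recording settings:

```python
import numpy as np

# Hypothetical pupil trace: 4 s at an assumed 250 Hz, arbitrary units.
fs = 250                          # sampling rate (Hz), assumed
t = np.arange(0, 4, 1 / fs)       # time relative to stimulus onset (s)
trace = 3.0 + 0.2 * np.sin(t)     # toy pupil-size signal

# Average pupil size in the 1-3 s post-onset window, as in panel (E).
window = (t >= 1.0) & (t < 3.0)
mean_pupil = trace[window].mean()
```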

Source publication
Article
Full-text available
Physiological responses during human–robot interaction are useful alternatives to subjective measures of uncanny feelings for nearly humanlike robots (uncanny valley) and comparable emotional responses between humans and robots (media equation). However, no studies have employed the easily accessible measure of pupillometry to confirm the uncanny...

Similar publications

Article
Full-text available
Physical proximity is important in social interactions. Here, we assessed whether simulated physical proximity modulates the perceived intensity of facial emotional expressions and their associated physiological signatures during observation or imitation of these expressions. Forty-four healthy volunteers rated intensities of dynamic angry or happy...

Citations

... PLEA functions as an observant robot, reacting emotionally to the user's mood during exchanges [22,23], acting as an emotional mirror that accurately reflects the interacting person's emotional state. Its theoretical basis is inspired by the Media Equation Theory [24,25,26,27,28], suggesting that humans attribute humanlike qualities to computers and other media, treating them as social entities. Post-interaction interviews with users have unveiled nuanced factors that shape social interactions and contribute to human well-being. ...
Conference Paper
The emergence of Artificial Intelligence (AI) marks a significant milestone in innovations, particularly with the advent of Virtual Beings (VBs) and Mixed Reality. VBs have transitioned from rudimentary programmed characters to elaborate, interactive entities capable of sophisticated human engagement. Enhanced with emotional intelligence, adaptive learning, and context-sensitivity, VBs offer nuanced interactions within both digital and real-world settings. A key breakthrough in this field is the development of affective VBs, which possess the ability to comprehend and react to human emotions, challenging the traditional view of AI as emotionless and strictly logical. This evolution prompts a reexamination of AI’s societal role and the dynamics of Human-Computer Interaction. This study focuses on the complexities of VBs, particularly through the implementation of a virtual being named PLEA, manifested in both worlds: the virtual and the physical one through a robotic head. It discusses the utility of such agents in various applications and employs ethnographic communication methodologies for data collection and analysis to unearth interaction patterns. Additionally, it examines human reactions to PLEA through a user-centered design approach, highlighting interactions based solely on facial expressions between PLEA and human participants. This investigation aims to lay the groundwork for developing multidisciplinary methods to collect, analyze, and abstract data from real-time interactions and feedback sessions, advancing the discourse on AI’s integration into human social environments.
... They found that pupil dilation can predict a person's individual bias in observing a robot, arguing that people with a mentalistic bias stick more to their first impressions and thus use less mental effort to interpret robot behavior than mechanistically biased people. Reuten et al. [33] used pupillometry to research the uncanny valley and media equation theory of robotic and human faces expressing basic emotions. Their results show that nearly human-like robots evoke weaker pupil dilation, making emotional expressions more challenging to recognize. ...
... Investigating mental effort concerning understanding expressive nonverbal communication by robots is essential to establishing intuitive processing of robot behavior. Several studies within psychology have focused on using pupillometry to assess arousal [10], [33] and mental effort [32]. However, these studies did not explicitly consider the intuitive human understanding of robot behavior. ...
Article
Full-text available
Robots are becoming part of our social landscape. Social interaction between humans and robots must be efficient and intuitive to understand, and nonverbal cues help make it so. This study measures mental effort to investigate factors influencing the intuitive understanding of expressive nonverbal robot motions. Using an eye tracker to measure pupil response and gaze, fifty participants were asked to watch eighteen short video clips featuring three different types of robots performing expressive robot behaviors. Our findings indicate that the appearance of the robot, the viewing angle, and the expression shown by the robot all influence the cognitive load and may therefore affect the intuitive understanding of expressive robot behavior. Furthermore, we found differences in the fixation time for different features of the various robots. With these insights, we identified possible improvement directions for making interactions between humans and robots more efficient and intuitive.
... In the literature, the uncanny valley effect has been mainly measured by subjective questionnaires [24,36]. However, changes in more objective measures, such as weaker pupil size dilation [46], heightened avoidance [58], electrodermal activity, and heart rate [10], have also been observed. In particular, the heightened psychophysiological measures and avoidance found in previous studies might imply that the sIgA increase found in the control group of the present study might be caused by uncanny valley effects. ...
Conference Paper
Full-text available
Previous work suggests that the mere visual perception of disease cues displayed in 2D videos or photos can proactively enhance mucosal immune responses even without actual pathogen exposure. In this paper, we present the first immersive immunological experiment, which investigates whether social interactions with virtual agents in virtual reality (VR) can lead to a mucosal immune response, in particular, a proactive release of secretory immunoglobulin A (sIgA) in saliva. Therefore, we simulated a virtual bus stop scenario of enhanced airborne contagion risk in which participants were required to closely approach and establish eye contact with ten agents in two conditions. In the first (i.e., contagion) condition, seven of the ten agents sneezed directly before smiling or at predefined intervals. The second (i.e., control) condition used the same agents but without sneezes. We tested 70 healthy participants in a between-subjects design, measured changes in salivary sIgA, as well as subjectively perceived disgust and contagion risk, and assessed their sense of presence and cybersickness in the VE. We found that sIgA secretion increased in both scenarios, while in the control scenario, this increase also correlated with the perceived involvement and sense of presence in the VE. This suggests that the intimate social interactions with virtual agents were sufficient to trigger increased sIgA secretion regardless of sneezing. Hence, VR can be used to provoke proactive immune responses in laboratory experiments.
... While multiple psychological mechanisms underlying the cubic relation between human likeness and emotional impressions have been proposed and investigated, there is little consensus on the exact processes (Wang et al., 2015; Reuten et al., 2018; Kätsyri et al., 2019; Zhang et al., 2020; Diel and MacDorman, 2021). Categorization difficulty or ambiguity has been proposed to cause uncanniness in entities lying at the borders between human and robot categories (Yamada et al., 2013; Cheetham et al., 2014). ...
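The cubic relation mentioned in the snippet above is typically summarized by fitting a third-degree polynomial to human-likeness and affinity ratings. A minimal sketch with fabricated toy values (not data from any of the cited studies):

```python
import numpy as np

# Toy illustration of a cubic relation between human likeness (x)
# and affinity ratings (y); values are invented, with a dip placed
# at high likeness to mimic an uncanny-valley-like shape.
x = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 0.9, 1.0])
y = np.array([0.1, 0.3, 0.5, 0.6, 0.2, 0.1, 0.9])

coeffs = np.polyfit(x, y, deg=3)   # least-squares cubic fit
fitted = np.polyval(coeffs, x)     # model predictions at the ratings
```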
Article
Full-text available
The uncanny valley describes the typically nonlinear relation between the esthetic appeal of artificial entities and their human likeness. The effect has been attributed to specialized (configural) processing that increases sensitivity to deviations from human norms. We investigate this effect in computer-generated, humanlike android and human faces using dynamic facial expressions. Angry and happy expressions with varying degrees of synchrony were presented upright and inverted and rated on their eeriness, strangeness, and human likeness. A sigmoidal function of human likeness and uncanniness (“uncanny slope”) was found for upright expressions and a linear relation for inverted faces. While the function is not indicative of an uncanny valley, the results support the view that configural processing moderates the effect of human likeness on uncanniness and extend its role to dynamic facial expressions.
... Scholars have, for example, found that participants experienced stronger negative feelings when watching a video after seeing a harmful interaction with a robot than positive feelings after seeing a friendly interaction (Menne & Schwab, 2018; Rosenthal-von der Pütten et al., 2013). Reuten et al. (2018) observed stronger emotional reactions if participants saw robot faces with expressions of negative feelings, compared to robot faces suggestive of positive feelings. However, literature about distinct reactions depending on the type of consequence when interacting with a technology is still relatively scarce. ...
Article
Increasing adoption of AI-enabled technology, such as digital voice assistants (VA), in the workplace raises questions about the consequences for employees of collaborating with such new technology. This study thus theorizes about employees’ perceptions of autonomy, organizational support, and psychological costs when receiving help from VA, a co-worker, or a conventional computer (technology); and analyzes the implications of these perceptions on job satisfaction. In a between-subject online vignette experiment, 225 participants assessed a workplace situation where they received help from one of the three sources. Participants collaborating with the VA or the co-worker perceived less autonomy and more psychological costs than those who collaborated with conventional technology. In contrast, regarding perceived organizational support, receiving help from the VA or the computer led to less perceived organizational support than receiving help from a co-worker. Further, autonomy and organizational support were significantly related to job satisfaction. Results show that the indirect effect on job satisfaction of receiving help from a VA differs from the indirect effects of receiving help from a human co-worker or conventional technology. These findings have important implications for organizations’ understanding of how the introduction of new technologies may influence the perceived job satisfaction of employees.
... It is generally accepted that pupil dilation is a result of noradrenergic locus coeruleus activity (Breton-Provencher & Sur, 2019; Joshi, Li, Kalwani, & Gold, 2016; Liu, Rodenkirch, Moskowitz, Schriver, & Wang, 2017; Murphy, O'Connell, O'Sullivan, Robertson, & Balsters, 2014; Reimer et al., 2016). Fluctuations in pupil size can be an indication of different mental processes (for a review, see: Binda & Murray, 2015; Joshi & Gold, 2020; Mathôt, 2018; Strauch, Wang, Einhäuser, Van der Stigchel, & Naber, 2022), such as memory load (Kahneman, 1966), cognitive effort (Alós-Ferrer, Jaudas, & Ritschel, 2021), conflict processing (van Steenbergen & Band, 2013), surprise (Preuschoff, 't Hart, & Einhäuser, 2011), and (emotional) arousal (Reimer et al., 2014; Reuten, van Dam, & Naber, 2018; Vinck, Batista-Brito, Knoblich, & Cardin, 2015). All are related to sympathetic nervous system activity (Bradley, Miccoli, Escrig, & Lang, 2008). ...
Article
Full-text available
Previous studies used gaze behavior to predict product preference in value-based decision-making, based on gaze angle variables such as dwell time, fixation duration and the first fixated product. While the application for online retail seems obvious, research with realistic web shop stimuli has been lacking so far. Here, we studied the decision process for 60 Dutch web shops of a variety of retailers, by measuring eye movements and pupil size during the viewing of web shop images. The outcomes of an ordinal linear regression model showed that a combination of gaze angle variables accurately predicted product choice, with the total dwell time being the most predictive gaze dynamic. Although pupillometric analysis showed a positive relationship between pupil dilation and product preference, adding pupil size to the model only slightly improved the prediction accuracy. The current study holds the potential to substantially improve retargeting mechanisms in online marketing based on consumers' gaze information. Also, gaze-based product preference proves to be a valuable metric for pre-testing product introductions in market research, and could help prevent product launches from failing.
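The gaze-to-choice idea in the abstract above can be caricatured in a few lines; the sketch below uses a softmax over toy dwell times as a crude stand-in for the study's ordinal regression model (the dwell values are invented):

```python
import numpy as np

# Toy dwell times (s) on four products shown on one web-shop page.
dwell = np.array([1.2, 0.4, 3.1, 0.8])

# Softmax over dwell times: longer total dwell time on a product is
# taken as evidence that it will be the chosen one.
probs = np.exp(dwell) / np.exp(dwell).sum()
predicted_choice = int(np.argmax(probs))   # index of most-viewed product
```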
... Shih et al. [18] designed a robot that extracts the face and computes the facial image feature using a support vector machine (SVM) to classify the facial expression into different emotional states. Then, to measure pupillary responses toward robots and their relation to the Uncanny Valley, Reuten et al. [19] conducted a study with 40 participants, recording their pupil size while they viewed images of robots and humans expressing emotions such as happiness, sadness, anger, fear, and a neutral state; participants also rated whether the robots looked uncanny and humanlike, yielding three analysis factors (human likeness, uncanniness, and interaction) to recreate the Uncanny Valley effect. The authors concluded that psychological responses toward humans and robots are parallel. ...
... For positive emotions, we distinguished the robot's facial expressions into nine animated faces that are interested (1), proud (3), happy (5), expecting (6), confident (10), active (11), pleased (12), shy (20), and singing (22). For negative emotions, we distinguished the robot's facial expressions into eight animated faces that are doubting (2), impatient (9), helpless (13), serious (14), worried (15), lazy (17), tired (19), and innocent (21). In this portion, the animated faces (13) and (19) were almost identical, and it was hard to recognize which emotion each conveyed. Therefore, we merged animated faces (13) and (19) into one animated face (19) in the negative emotion portion. ...
Article
Full-text available
In this paper, we investigate the relationship between emotions and colors by showing robot animated emotion faces and colors to participants through a series of surveys. We focused on representing a visualized emotion through a robot's facial expression and background colors. To complete the emotion design with animated faces and color backgrounds, we designed an experiment to survey users' impressions. We used the ASUS Zenbo as our example of a robot animated face, selecting 11 colors as our color backgrounds and 24 facial expressions from Zenbo. To analyze the questionnaire results, we used histograms to show the basic data distribution and multiple logistic regression analysis (MLRA) to examine the marginal relationships. We separated our questionnaires into positive and negative versions and divided the dataset into three cases to discuss the different relationships between color and emotion. Results showed that people preferred the blue color regardless of whether the face showed positive or negative emotion. The MLRA also showed that the percentage of correct classifications was highest in case 2, for both positive and negative emotions, and participants' perceptions of Zenbo's animated faces matched their expectations. Through our experimental design, we hope that people will consider more colors together with emotions when designing human–robot interfaces, bringing them closer to users' expectations and making life more colorful through comfortable interactions with robots.
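The MLRA step in the abstract above can be approximated by a plain logistic regression; the sketch below fits one by gradient descent on fabricated predictors (the features and labels are invented, and this is a binary simplification rather than the paper's multi-case analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: predict a binary emotion label (positive/negative) from
# two fabricated predictors (e.g., coded color attributes).
X = rng.normal(size=(200, 2))
true_w = np.array([1.5, -2.0])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Plain logistic regression fit by gradient descent, a minimal
# stand-in for a multiple logistic regression analysis (MLRA).
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))     # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)  # average gradient step

accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
```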
... The familiarity measurement model of humans' reactions to the PC of agents was established by applying the social model of the Uncanny Valley [4,56]. The model structure, built according to the Uncanny Valley theory, is shown in Fig. 4. ...
Article
Full-text available
This study explored the emotional influence of pupillary change (PC) of robots with different human-likeness levels on people. Images of the eye areas of five agents, including one human and four existing typical humanoid robots with varying human-likeness levels, were edited into five 27-s videos. In the experimental group, we showed five videos with PC applied to the eyes of agents to 31 participants, and in the control group, five videos without PC were shown to another 31 participants. Afterward, the participants were asked to rate their feelings about the videos. The results showed that PC did not change people’s emotions towards agents independently. However, PC applied to the eyes of a robot representing an agent of no threat who may evoke empathy subconsciously enhanced people’s positive emotions, while PC applied to human images increased people’s negative emotions and reduced the feeling of familiarity.
... Next, the method we adopted to study the uncanny valley in the present research was restricted to self-report ratings. However, there are many other methods available to examine the uncanny valley, such as in-person observation of interactions with a robot (Becker-Asano et al., 2010), eye-gaze tracking (Shimada et al., 2006), pupillary responses (Reuten et al., 2018), reaction time measures (Cheetham et al., 2011, 2015; Looser & Wheatley, 2010), brain imaging (Cheetham et al., 2011, 2015; Saygin et al., 2012), and so forth. In the future, therefore, it would be necessary to use these alternative methods to validate the current findings. ...
Preprint
Full-text available
The uncanny valley hypothesis describes how increased human-likeness of artificial entities, ironically, could elicit a surge of negative reactions from people. Much research has studied the uncanny valley hypothesis, but little research has sought to examine people's reactions to a broad range of human-likeness manifested in real-world robots. We focused on examining people's emotional responses to real-world, as opposed to hypothetical, robots because these robots impact real-life human–robot interactions. We measured both positive and negative emotional responses to a large collection of full-body images of robots (N = 251) with various human-like features. We found evidence for the existence of not one, but two uncanny valleys. Mori's uncanny valley emerged for high human-like robots and a second uncanny valley emerged for moderately low human-like robots. We attributed these valleys to unique combinations of perceptual mismatches between human-like features, specified by a match between surface and facial feature dimensions accompanied by a mismatch with the body-manipulator dimension. We also found that patterns of the uncanny valleys differed between positive (shinwakan) and negative (bukimi) emotional responses. Lastly, the word uncanny appeared to be an unreliable measure of the uncanny valley. Implications for robot design and the uncanny valley research are discussed.