Fig. 2. Screenshot of the Web interface for subjective emotion assessment.

Source publication
Article
Full-text available
Assessing emotional states of users evoked during their multimedia consumption has received a great deal of attention with recent advances in multimedia content distribution technologies and increasing interest in personalized content delivery. Physiological signals such as the electroencephalogram (EEG) and peripheral physiological signals have be...

Similar publications

Article
Full-text available
Emotion recognition plays an essential role in human-computer interaction. Previous studies have investigated the use of facial expression and electroencephalogram (EEG) signals as single modalities for emotion recognition separately, but few have paid attention to a fusion between them. In this paper, we adopted a multimodal emotion recognition frame...
Article
Full-text available
Patients with Parkinson's disease (PD) can exhibit a reduction of spontaneous facial expression, designated as "facial masking," a symptom in which facial muscles become rigid. To improve clinical assessment of facial expressivity of PD, this work attempts to quantify the dynamic facial expressivity (facial activity) of PD by automatically recogniz...

Citations

... Some of the most commonly used stimuli are music videos [18], [48], [63]-[65], visual stimuli [23], [40], [41], [45], [53], [54], [67], images [31], [33], [36], [46], [50], [51], [68], [69], audio [29], [34], [37], [52], task-based stimuli [24], [25], [27], [38], [39], [42]-[44], [48], [55]-[57], [58], [62], [70]-[75], film clips [76], [77], and ordinary video clips [26], [62], [78]. In task-based stimuli, subjects are instructed to perform either a mental task (mathematics-related problems, memorizing, computer-based gaming, and reading) or a physical task (cold-pressor test, rope skipping, surgical task, and fatigue exercise). ...
... Table 1 and Table 2 show the various application types (psycho-physiological states) and stimuli used in each study. A subjective assessment is then carried out using various self-evaluation methods, namely the Likert scale [46], [54], the Roken Arousal Scale (RAS) [52], the Self-Assessment Manikin (SAM) [18], [23], [48], [63], [64], [78], the Karolinska Sleepiness Scale (KSS) [27], and the Borg Rating of Perceived Exertion (RPE) scale [25]. In the post-stimulus phase, participants are brought back to a neutral state. ...
... Both single- and multichannel bio-signals are acquired at sampling rates ranging from 30 to 2048 Hz. In some cases those signals are downsampled to reduce processing time [25], [26], [30], [39], [43], [53], [54], [57], [64], [73], [78]. Among database-based emotion-related articles, five papers have used a dimensional ...
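Downsampling of this kind is a one-line operation in most signal-processing stacks. A minimal Python sketch using scipy.signal.decimate follows; the 2048 Hz and 256 Hz rates are chosen for illustration and are not taken from any of the cited studies:

```python
import numpy as np
from scipy.signal import decimate

def downsample(signal: np.ndarray, fs_in: int, fs_out: int) -> np.ndarray:
    """Low-pass filter and downsample a 1-D physiological signal."""
    if fs_in % fs_out != 0:
        raise ValueError("fs_in must be an integer multiple of fs_out")
    # decimate applies an anti-aliasing filter before discarding samples
    return decimate(signal, fs_in // fs_out, zero_phase=True)

fs_in, fs_out = 2048, 256                # illustrative rates, not from the review
x = np.random.randn(10 * fs_in)          # 10 s stand-in for a recorded signal
y = downsample(x, fs_in, fs_out)
print(x.shape, y.shape)                  # (20480,) -> (2560,)
```

With zero_phase=True the filter runs forward and backward, so the downsampled trace stays time-aligned with any event markers.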
Article
Full-text available
The interaction between the central nervous system (CNS) and peripheral nervous system (PNS) governs various physiological functions and influences cognitive processes and emotional states. Unraveling the mechanisms governing the interaction between the brain and the body is necessary to enhance our understanding of physical and mental well-being. Neuro-ergonomics-based human-computer interaction can be improved by comprehending the intricate interrelation between the CNS and PNS. Various studies have explored CNS-PNS interaction in specific psychophysiological states, such as emotion, stress, or cognitive tasks, using diverse methodologies. However, a thorough, extensive, and systematic review covering diverse interaction forms, applications, and assessments is still needed. In this work, an attempt has been made to perform a systematic review that examines the interaction between the CNS and PNS across diverse psychophysiological states, focusing on varied physiological signals. For this, the scientific repositories Scopus, PubMed, Association for Computing Machinery, and Web of Science were accessed. In total, 61 articles published between January 2008 and April 2023 were identified for the systematic review. The selected research articles are analyzed based on factors such as subject information, stimulation modality, types of interactions between the brain and other organs, feature extraction techniques, classification methods, and statistical approaches. The evaluation of the existing literature indicates a scarcity of publicly available databases for CNS-PNS interaction and limited application of advanced machine-learning- and deep-learning-based tools. Furthermore, this review underscores the urgent need for enhancements in several key areas, including the development of more refined psycho-physiological models, improved analysis techniques, and better electrode-surface interface technology. Additionally, there is a need for more research involving daily-life activities, female-oriented studies, and privacy considerations. This review contributes to standardizing protocols, improving the diagnostic relevance of various instruments, and extracting more reliable biomarkers. The novelty of this study lies in guiding researchers toward various issues and potential solutions for future research in the field of bio-signal-based CNS-PNS interaction.
... However, what is less considered in navigation is how these very same environmental affordances, and the environment at large, can induce affective responses. Affordances can dictate where you can and cannot go, but they can also be alluring or repellent; these affective responses, through aesthetic processing, can therefore drive behaviours, physically and metaphorically drawing us in or away (Yazdani et al., 2012). How an inhabitant feels when entering a space, whether confident or unsettled, happy or scared, may factor into navigational choices and abilities. ...
Book
The concept of affordances is being increasingly used in fields beyond ecological psychology to reveal previously unexplored interdisciplinary relationships. These fields include engineering, robotics, artificial intelligence, neuroscience, urban theory, architecture, computer science, and much more. As the concept is adapted for its relational meaning between an agent and the environment or object, the meaning of the term has changed to fit the customs of the adapting field. This book maps the different shades of the term and brings insights into how it is operationalized by providing short essays accessible regardless of background. Each contribution addresses big questions around this topic, such as the application of the concept to ongoing research, how to measure or identify affordances, as well as other reflective questions about the future of affordances in the field. The book is envisioned to be read by non-experts, students, and researchers from several disciplines, and fills the need for summarization across disciplines. As the many adaptations flourished from the same psychological concept, this book also aims to function as a catalyst and motivation for reinterpreting the concept in new directions. Compared to existing books, this book aims not to span the vertical dimension of a field by taking a deep dive into a niche field; instead, it aims to have a wide horizontal span highlighting a common concept shared by an increasing number of fields, namely affordances. As such, this book takes a different approach by attempting to summarize the different emerging applications and definitions of the concept and make them accessible to non-experts, students, and researchers regardless of background and level.
... However, what is less considered in navigation is how these very same environmental affordances, and the environment at large, can induce affective responses. Affordances can dictate where you can and cannot go, but they can also be alluring or repellent; these affective responses, through aesthetic processing, can therefore drive behaviours, physically and metaphorically drawing us in or away (Yazdani et al., 2012). How an inhabitant feels when entering a space, whether confident or unsettled, happy or scared, may factor into navigational choices and abilities. ...
Chapter
Full-text available
Successful navigation often requires detecting and exploiting a range of affordances in the environment. These can be visible affordances such as a path enabling efficient travel, a landmark to distinguish the direction, or a boundary to locate a goal. Other affordances require greater integration of sensory information with stored knowledge, such as determining that a novel path will be a shortcut, or being able to infer that the current region of the environment has strong global connections due to its long line of sight and central location in the broader space. This essay reviews studies exploring affordances for navigation and their neural underpinnings. We highlight recent discoveries indicating a role of the occipital place area in detecting path affordances, the retrosplenial cortex in landmark processing, and the hippocampus in processing path connections. Finally, we extend our consideration to affordances of an environment that impact affect, where layout or features could induce negative affect such as fear, and the reverse where affordances can enhance positive affect by, for example, offering a sense of safety. Such alterations in the emotional state may impact navigation and learning of an environment, and we suggest new avenues for research to explore this. Keywords: Spatial navigation; Paths; Boundaries; Line of sight; Landmarks
... However, what is less considered in navigation is how these very same environmental affordances, and the environment at large, can induce affective responses. Affordances can dictate where you can and cannot go, but they can also be alluring or repellent; these affective responses, through aesthetic processing, can therefore drive behaviours, physically and metaphorically drawing us in or away (Yazdani et al., 2012). How an inhabitant feels when entering a space, whether confident or unsettled, happy or scared, may factor into navigational choices and abilities. ...
Chapter
In this essay, we investigate human relationships to Land that are implied by the notion of affordances. Some expressions can be read as supporting a logic of extraction: affordances are aspects of the environment, lying ready to be used, without any responsibility for care or reciprocation from the user of the affordance. Thinking with Indigenous philosophies and creation stories, we explore the possibility of grounding affordances in an alternative logic: the logic of the gift. The offerings of the environment, of which humans are a part, need to be reciprocated by practices of care and gratitude: only when Land is cared for and protected will it continue to offer its affordances. Keywords: Affordance; Indigenous philosophies; Solicitation; Logic of the gift; Relationality
... In this approach, various modalities are combined to overcome the weaknesses of each individual modality. Combining different physiological signals for emotion recognition (Yazdani et al., 2012; Shu et al., 2018) or fusing only behavioral modalities has been widely explored (Busso et al., 2008; McKeown et al., 2011). Recently, some studies have tried to improve emotion recognition methods by exploiting both physiological and behavioral techniques (Zheng et al., 2018; Huang et al., 2019; Zhu et al., 2020). ...
Article
Full-text available
Emotions are multimodal processes that play a crucial role in our everyday lives. Recognizing emotions is becoming more critical in a wide range of application domains such as healthcare, education, human-computer interaction, Virtual Reality, intelligent agents, entertainment, and more. Facial macro-expressions or intense facial expressions are the most common modalities in recognizing emotional states. However, since facial expressions can be voluntarily controlled, they may not accurately represent emotional states. Earlier studies have shown that facial micro-expressions are more reliable than facial macro-expressions for revealing emotions. They are subtle, involuntary movements responding to external stimuli that cannot be controlled. This paper proposes using facial micro-expressions combined with brain and physiological signals to more reliably detect underlying emotions. We describe our models for measuring arousal and valence levels from a combination of facial micro-expressions, Electroencephalography (EEG) signals, galvanic skin responses (GSR), and Photoplethysmography (PPG) signals. We then evaluate our model using the DEAP dataset and our own dataset based on a subject-independent approach. Lastly, we discuss our results, the limitations of our work, and how these limitations could be overcome. We also discuss future directions for using facial micro-expressions and physiological signals in emotion recognition.
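The subject-independent evaluation mentioned in this abstract is commonly implemented as leave-one-subject-out cross-validation. The sketch below is a generic illustration, not the authors' pipeline: the feature matrix, labels, and classifier (an RBF-kernel SVM) are placeholder assumptions, and only the 32-subject count echoes DEAP:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic placeholders: 320 trials, 64 fused multimodal features, 32 subjects.
rng = np.random.default_rng(0)
X = rng.standard_normal((320, 64))       # fused micro-expression/EEG/GSR/PPG features
y = rng.integers(0, 2, size=320)         # binary high/low valence labels
subjects = np.repeat(np.arange(32), 10)  # subject ID per trial

logo = LeaveOneGroupOut()
accs = []
for train_idx, test_idx in logo.split(X, y, groups=subjects):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[train_idx], y[train_idx])  # the model never sees the held-out subject
    accs.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean accuracy over held-out subjects: {np.mean(accs):.3f}")
```

Holding out each subject in turn is what makes the reported accuracies subject-independent: performance is measured only on people whose data never entered training.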
... Similarly, the feeling of pleasure is considered one of the most important facets of aesthetic experience. Crucially for the built environment context, not only are emotions central to the valuation of the aesthetic object, but they also drive behaviours, attracting or repelling us, both physically and metaphorically (Yazdani, Lee, Vesin, & Ebrahimi, 2012). Thus, aesthetic emotions play a key role in influencing decision making, including how we choose to enter and move through spaces (Brown, Gao, Tisdelle, Eickhoff, & Liotti, 2011). ...
Article
Full-text available
When studying architectural experience in the lab, it is of paramount importance to use a proxy as close to real-world experience as possible. Whilst still images visually describe real spaces, and virtual reality allows for dynamic movement, each medium lacks the alternative attribute. To merge these benefits, we created and validated a novel dataset of valenced videos of first-person-view travel through built environments. This dataset was then used to clarify the relationship of core affect (valence and arousal) and architectural experience. Specifically, we verified the relationship between valence and fascination, coherence, and hominess - three key psychological dimensions of architectural experience which have previously been shown to explain aesthetic ratings of built environments. We also found that arousal is only significantly correlated with fascination, and that both are embedded in a relationship with spatial complexity and unusualness. These results help to clarify the nature of fascination, and to distinguish it from coherence and hominess when it comes to core affect. Moreover, these results demonstrate the utility of a video dataset of affect-laden spaces for understanding architectural experience.
... Moreover, these results demonstrate the utility of a video dataset of affect-laden spaces for understanding architectural experience. ... repelling us, both physically and metaphorically (Yazdani et al., 2012). Thus, aesthetic emotions play a key role in influencing decision making, including how we choose to enter and move through spaces (Brown et al., 2011). ...
Preprint
Full-text available
When studying architectural experience in the lab, it is of paramount importance to use a proxy as close to real-world experience as possible. Whilst still images visually describe real spaces, and virtual reality allows for dynamic movement, each medium lacks the alternative attribute. To merge these benefits, we created and validated a novel dataset of valenced videos of first-person-view travel through built environments. This dataset was then used to clarify the relationship of core affect (valence and arousal) and architectural experience. Specifically, we verified the relationship between valence and fascination, coherence, and hominess - three key psychological dimensions of architectural experience which have previously been shown to explain aesthetic ratings of built environments. We also found that arousal is only significantly correlated with fascination, and that both are embedded in a relationship with spatial complexity and unusualness. These results help to clarify the nature of fascination, and to distinguish it from coherence and hominess when it comes to core affect. Moreover, these results demonstrate the utility of a video dataset of affect-laden spaces for understanding architectural experience.
Highlights:
- A new database of videos showing first-person-view journeys through built environments is developed.
- We explored how core affect and architectural experience relate through these videos.
- Previous findings are corroborated showing valence ties to fascination, coherence and hominess.
- Arousal correlates only with fascination, and not coherence or hominess.
- Arousal and fascination are tied to spatial complexity and unusualness.
... Although single physiological modalities have been widely used, combining these signals can produce even better results. For example, Yazdani et al. [93] used EEG, BVP, BT, RR, Electrooculography (EOG), and Electromyography (EMG) for affect recognition in people watching videos, achieving highest accuracies of 61.7% and 53.3% in a subject-independent approach using EEG signals. In similar work, Yang et al. [92] used physiological data to recognize the emotions of people playing video games. ...
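Combining signals in this way is often done by feature-level fusion: concatenating per-modality feature vectors before classification. A minimal sketch, assuming generic stand-in features (EEG band power and simple time-domain statistics) rather than the features actually used in [93]:

```python
import numpy as np
from scipy.signal import periodogram

def eeg_band_power(eeg: np.ndarray, fs: int) -> np.ndarray:
    """Mean spectral power in four canonical EEG bands (theta..gamma)."""
    freqs, psd = periodogram(eeg, fs=fs)
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands])

def time_stats(x: np.ndarray) -> np.ndarray:
    """Generic time-domain descriptors for peripheral signals (BVP, RR, ...)."""
    return np.array([x.mean(), x.std(), x.min(), x.max()])

# Feature-level fusion: one concatenated vector per trial.
fs_eeg = 256
eeg = np.random.randn(60 * fs_eeg)   # 1 min of one EEG channel (synthetic)
bvp = np.random.randn(60 * 64)       # 1 min of BVP sampled at 64 Hz (synthetic)
fused = np.concatenate([eeg_band_power(eeg, fs_eeg), time_stats(bvp)])
print(fused.shape)                   # (8,) per-trial fused feature vector
```

The fused vectors then feed any standard classifier; decision-level fusion (combining per-modality classifier outputs) is the usual alternative.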
Conference Paper
Full-text available
Emotions are complicated psycho-physiological processes that are related to numerous external and internal changes in the body. They play an essential role in human-human interaction and can be important for human-machine interfaces. Automatically recognizing emotions in conversation could be applied in many application domains like healthcare, education, social interactions, entertainment, and more. Facial expressions, speech, and body gestures are primary cues that have been widely used for recognizing emotions in conversation. However, these cues can be ineffective, as they cannot reveal underlying emotions when people involuntarily or deliberately conceal their emotions. Researchers have shown that analyzing brain activity and physiological signals can lead to more reliable emotion recognition, since they generally cannot be controlled. However, these body responses in emotional situations have rarely been explored in interactive tasks like conversations. This paper explores and discusses the performance and challenges of using brain activity and other physiological signals in recognizing emotions in a face-to-face conversation. We present an experimental setup for stimulating spontaneous emotions using a face-to-face conversation and creating a dataset of brain and physiological activity. We then describe our analysis strategies for recognizing emotions using Electroencephalography (EEG), Photoplethysmography (PPG), and Galvanic Skin Response (GSR) signals in subject-dependent and subject-independent approaches. Finally, we describe new directions for future research in conversational emotion recognition and the limitations and challenges of our approach.
... Yazdani et al. [5] have proposed a method for estimating changes in the user's emotions caused by watching a video, based on vital data such as skin temperature and brain waves. In this regard, measuring a large number of vital signals such as skin temperature and EEG requires the user to wear many electrodes and sensor devices, which significantly restricts the user's actions and movements. ...
Conference Paper
Recognizing user emotions while watching videos is critical for video content modification and personalisation. Many studies have attempted to objectively assess the viewer's perception by utilizing different devices and vital data. However, such equipment might add to the strain and psychological stress of watching a movie, affecting the overall experience. Smartwatches now include pulse-rate measurement technology, and wearing one has no negative impact on daily life. Therefore, in this research, we proposed using a smartwatch to objectively assess emotions while watching a video. For that purpose, in this study, we clarified the relationship between heart rate variability (HRV) measured with a smartwatch and emotions experienced when watching videos. We measured the HRV of 10 people, each watching a 10-minute horror video alone, and extracted 11 features from both the frequency and time domains to predict two emotional states, 'Fear' and 'Fearlessness Situation', while watching a horror movie. We evaluated the effect of feature extraction, window characteristics, and machine learning model on the prediction accuracy. As a result, the Support Vector Machine (SVM) model, fed with the features extracted from the 60 seconds following the horror event, obtained the best average F1 value of 90%.
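Standard time-domain HRV features such as SDNN, RMSSD, and pNN50 are computed directly from inter-beat intervals. The sketch below mirrors the abstract's 60 s window and SVM choice, but everything else is assumed: the specific 11 features, the data, and the labels are not from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def hrv_time_features(ibi_ms: np.ndarray) -> np.ndarray:
    """Time-domain HRV features from inter-beat intervals (milliseconds)."""
    sdnn = ibi_ms.std(ddof=1)                  # overall variability
    diffs = np.diff(ibi_ms)
    rmssd = np.sqrt(np.mean(diffs ** 2))       # beat-to-beat variability
    pnn50 = np.mean(np.abs(diffs) > 50) * 100  # % of successive diffs > 50 ms
    return np.array([ibi_ms.mean(), sdnn, rmssd, pnn50])

# Placeholder data: one 60 s window of IBIs per labelled segment.
rng = np.random.default_rng(1)
windows = [800 + 50 * rng.standard_normal(70) for _ in range(40)]
X = np.vstack([hrv_time_features(w) for w in windows])
y = rng.integers(0, 2, size=40)  # 1 = 'Fear', 0 = 'Fearlessness' (synthetic)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```

Frequency-domain features (e.g., LF/HF power) would be added by spectral analysis of the interpolated IBI series, which is presumably how the paper reaches its 11 features.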
... Recognizing the emotions of users while they watch videos freely in indoor and outdoor environments can enable customization and personalization of video content [2,3]. Although previous work has focused on emotion recognition for video watching, it is typically restricted to static, desktop environments [1,4,5] and focuses on recognizing one emotion per video stimulus [6][7][8]. In the latter case, such emotion recognition is temporally imprecise, since it does not capture the time-varying nature of human emotions [9,10]: users can have and report multiple emotions while watching a single video. ...
Article
Full-text available
Recognizing user emotions while they watch short-form videos anytime and anywhere is essential for facilitating video content customization and personalization. However, most works either classify a single emotion per video stimuli, or are restricted to static, desktop environments. To address this, we propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (fine-grained segment of signals) using only wearable, physiological signals (e.g., electrodermal activity, heart rate). CorrNet takes advantage of features both inside each instance (intra-modality features) and between different instances for the same video stimuli (correlation-based features). We first test our approach on an indoor-desktop affect dataset (CASE), and thereafter on an outdoor-mobile affect dataset (MERCA) which we collected using a smart wristband and wearable eyetracker. Results show that for subject-independent binary classification (high-low), CorrNet yields promising recognition accuracies: 76.37% and 74.03% for V-A on CASE, and 70.29% and 68.15% for V-A on MERCA. Our findings show: (1) instance segment lengths between 1–4 s result in the highest recognition accuracies; (2) accuracies between laboratory-grade and wearable sensors are comparable, even under low sampling rates (≤64 Hz); (3) large amounts of neutral V-A labels, an artifact of continuous affect annotation, result in varied recognition performance.
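The instance-level processing this abstract describes, slicing a continuous wearable signal into fixed-length segments and extracting features inside each instance, can be sketched as follows. The 4 s instance length is taken from the reported 1-4 s range; the mean/std/slope descriptors are generic stand-ins, not CorrNet's learned intra-modality features:

```python
import numpy as np

def segment_instances(signal: np.ndarray, fs: int, seg_s: float) -> np.ndarray:
    """Slice a continuous signal into non-overlapping fixed-length instances."""
    seg_len = int(seg_s * fs)
    n = len(signal) // seg_len
    return signal[: n * seg_len].reshape(n, seg_len)

def intra_instance_features(instances: np.ndarray) -> np.ndarray:
    """Simple per-instance descriptors: mean, std, and linear trend."""
    t = np.arange(instances.shape[1])
    slope = np.polyfit(t, instances.T, 1)[0]  # linear trend of each instance
    return np.column_stack([instances.mean(1), instances.std(1), slope])

fs = 64                                              # wearable-grade rate (Hz)
eda = np.cumsum(np.random.randn(fs * 120)) * 0.01    # 2 min synthetic EDA trace
inst = segment_instances(eda, fs, seg_s=4.0)         # 4 s instances
feats = intra_instance_features(inst)
print(inst.shape, feats.shape)                       # (30, 256) (30, 3)
```

Each instance then receives its own V-A label, which is what gives the approach its temporal precision compared with one label per video.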