Fig 2 - uploaded by Kaveh Bakhtiyari
Plutchik's "Emotion Wheel" [4]

Source publication
Article
Full-text available
Emotions play an important role in human interactions. They can be integrated into computer systems to make human-computer interaction more effective. Affective Computing is an innovative field concerned with computationally modelling and detecting users' emotions in order to optimize system responses in Human-Computer Interaction (HCI). However, there is a trade-off between r...

Context in source publication

Context 1
... classification of emotions was proposed by Plutchik and is known as Plutchik's emotion wheel [4]. In this classification, eight emotions are arranged on a circle so that each emotion is related by its properties to the neighbouring emotion at a fixed angle. These emotions are illustrated in Fig. ...
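
To make the geometric idea concrete, here is a minimal Python sketch (not from the source article) that places the eight primary emotions at 45-degree intervals on a circle; the ordering used follows the commonly depicted layout of the wheel and is an assumption for illustration.

```python
import math

# Eight primary emotions of Plutchik's wheel, listed in the order they
# commonly appear around the circle (an assumption for illustration).
PRIMARY_EMOTIONS = ["joy", "trust", "fear", "surprise",
                    "sadness", "disgust", "anger", "anticipation"]

def wheel_angle(emotion: str) -> float:
    """Angle (degrees) of an emotion on the wheel, spaced 45 degrees apart."""
    return PRIMARY_EMOTIONS.index(emotion) * 360.0 / len(PRIMARY_EMOTIONS)

def angular_distance(a: str, b: str) -> float:
    """Smallest angle between two emotions: adjacent emotions are 45 degrees
    apart, opposites (e.g. joy vs. sadness) are 180 degrees apart."""
    diff = abs(wheel_angle(a) - wheel_angle(b)) % 360.0
    return min(diff, 360.0 - diff)

if __name__ == "__main__":
    print(angular_distance("joy", "trust"))    # 45.0 (adjacent emotions)
    print(angular_distance("joy", "sadness"))  # 180.0 (opposite emotions)
```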

Similar publications

Conference Paper
Full-text available
Biometric systems have been applied to improve the security of several computational systems. These systems analyse physiological or behavioural features obtained from the users in order to perform authentication. Biometric features should ideally meet a number of requirements, including permanence. In biometrics, permanence means that the analysed...

Citations

... Thinking in terms of the process-based approach [4], AfC systems rely on a sequence of three separate, deliberate and systematic processes. Taking into account the context of the subject matter, data collection is the process of gathering and measuring data on affective variables using one or more input devices (modalities). Typically, hardware equipment is used, such as digital cameras [18], microphones [3], motion-sensing devices [29], as well as data collectors connected to the central or peripheral nervous system [48]. ...
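
As a rough illustration of such a process-based pipeline, the sketch below chains three stages (collect, recognise, respond). The stage names and the stub logic are assumptions made here for illustration; the snippet only states that three sequential processes are involved, with data collection as the first.

```python
from typing import Any, Callable, Dict, List

def collect(modalities: List[Callable[[], Dict[str, Any]]]) -> Dict[str, Any]:
    """Data collection: gather raw affective variables from input devices."""
    samples: Dict[str, Any] = {}
    for read_modality in modalities:
        samples.update(read_modality())
    return samples

def recognise(samples: Dict[str, Any]) -> str:
    """Affect recognition: map raw samples to an emotional state (stubbed)."""
    return "neutral" if not samples else "engaged"

def respond(state: str) -> str:
    """System response: adapt the interface to the recognised state."""
    return f"adapting UI for state: {state}"

if __name__ == "__main__":
    camera = lambda: {"face_frame": "..."}            # e.g. digital camera modality
    keyboard = lambda: {"keystrokes": [0.12, 0.31]}   # e.g. keystroke timings
    print(respond(recognise(collect([camera, keyboard]))))
```
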
Article
Full-text available
Affective computing (AfC) is a continuously growing multidisciplinary field, spanning areas from artificial intelligence, through engineering, psychology, education and cognitive science, to sociology. Therefore, many studies have been devoted to addressing numerous issues regarding different facets of AfC solutions. However, there is a lack of classification of AfC systems. This study aims to fill this gap by reviewing and evaluating the state-of-the-art studies in a qualitative manner. In this line of thinking, we put forward a threefold classification that breaks down into desktop AfC systems, mobile AfC systems, and AfC machines. Moreover, we identified four types of AfC systems based on the features extracted. In our opinion, the results of this study can serve as a guide for future affect-related research and design, on the one hand, and provide a better understanding of the role of emotions and affect in human-computer interaction, on the other hand.
... It can also be used for emotion recognition [4,41,42], which is the focus of this paper. There have been some review papers [43,44,46-49] related to emotion recognition from KMT dynamics. Some [43,44,46,47] were published in 2013-2015, reflecting on some earlier research, while the recent paper [48] focused on advances and applications of KMT dynamics, and paper [49] only reviewed the research over the last decade. ...
... There have been some review papers [43,44,46-49] related to emotion recognition from KMT dynamics. Some [43,44,46,47] were published in 2013-2015, reflecting on some earlier research, while the recent paper [48] focused on advances and applications of KMT dynamics, and paper [49] only reviewed the research over the last decade. These studies summarized general information about emotion recognition procedures, features, datasets, and classification methods, but did not provide answers to the further questions we were concerned with (see Table I). ...
... The categorical approach describes emotions in a discrete way, such as the seven basic emotional states: neutral, happy, sad, angry, surprised, scared, and disgusted [5]. The dimensional approach describes emotions along three dimensions: arousal, valence and dominance [44]. Arousal is defined as the energy of a feeling, ranging from low (sleepy) to high (excited); valence describes the level of an emotion from positive to negative; while dominance describes the feeling of control, ranging from lack of control to being in control [45]. ...
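
A minimal sketch of the dimensional view follows: discrete labels are mapped to rough valence-arousal-dominance coordinates, and a dimensional estimate can be mapped back to the closest label. The numeric coordinates below are illustrative assumptions, not values taken from [44] or [45].

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VAD:
    """Dimensional emotion: valence (negative..positive), arousal (sleepy..excited),
    dominance (lack of control..in control), each scaled here to [-1, 1]."""
    valence: float
    arousal: float
    dominance: float

# Rough, illustrative coordinates for a few categorical emotions (assumed values).
CATEGORY_TO_VAD = {
    "happy":   VAD(valence=+0.8, arousal=+0.5, dominance=+0.4),
    "sad":     VAD(valence=-0.7, arousal=-0.4, dominance=-0.3),
    "angry":   VAD(valence=-0.6, arousal=+0.7, dominance=+0.3),
    "scared":  VAD(valence=-0.7, arousal=+0.6, dominance=-0.6),
    "neutral": VAD(valence=0.0, arousal=0.0, dominance=0.0),
}

def nearest_category(point: VAD) -> str:
    """Map a dimensional estimate back to the closest discrete label."""
    def dist(v: VAD) -> float:
        return ((v.valence - point.valence) ** 2 +
                (v.arousal - point.arousal) ** 2 +
                (v.dominance - point.dominance) ** 2)
    return min(CATEGORY_TO_VAD, key=lambda name: dist(CATEGORY_TO_VAD[name]))

if __name__ == "__main__":
    print(nearest_category(VAD(0.6, 0.4, 0.2)))  # -> "happy"
```
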
Article
Full-text available
Emotion can be defined as a subject’s organismic response to an external or internal stimulus event. The responses could be reflected in pattern changes of the subject’s facial expression, gesture, gait, eye-movement, physiological signals, speech and voice, keystroke, and mouse dynamics, etc. This suggests that on the one hand emotions can be measured/recognized from the responses, and on the other hand they can be facilitated/regulated by external stimulus events, situation changes or internal motivation changes. It is well-known that emotion has a close relationship with both physical and mental health, usually affecting an individual’s and a team’s work performance; thus, emotion recognition is an important prerequisite for emotion regulation towards better emotional states and work performance. The primary problem in emotion recognition is how to recognize a subject’s emotional states easily and accurately. Currently, there is a body of good research on emotion recognition from facial expression, gesture, gait, eye-tracking, and other physiological signals such as speech and voice, but these approaches are all intrusive or obtrusive to some extent. In contrast, keystroke, mouse and touchscreen (KMT) dynamics data can be collected non-intrusively and unobtrusively as secondary data responding to primary physical actions. Thus, this paper aims to review the state-of-the-art research on emotion recognition from KMT dynamics and to identify key research challenges, opportunities and a future research roadmap for referencing. In addition, this paper answers the following six research questions (RQs): (1) what are the commonly used emotion elicitation methods and databases for emotion recognition? (2) which emotions could be recognized from KMT dynamics? (3) what key features are most appropriate for recognizing different specific emotions? (4) which classification methods are most effective for specific emotions? (5) what are the application trends of emotion recognition from KMT dynamics? (6) which application contexts are of greatest concern?
... They are also expensive and are neither easily available nor usable in daily life. In recent studies, NLP and voice recognition have not yielded satisfactory results due to cultural and linguistic differences [10]. ...
... Other methods used keyboard keystrokes together with mouse movements on ordinary computers and achieved high accuracy in behavior recognition. These input devices are also cheap and easily portable [10]. ...
Article
Full-text available
After the COVID-19 pandemic, no one disputes the importance of smart online learning systems in the educational process. Measuring student engagement is a crucial step towards smart online learning systems. A smart online learning system can automatically adapt to learners’ emotions and provide feedback about their motivations. In the last few decades, online learning environments have generated tremendous interest among researchers in computer-based education. The challenge that researchers face is how to measure student engagement based on students’ emotions. There has been increasing interest in computer vision and camera-based solutions as technologies that overcome the limits of both human observation and the expensive equipment used to measure student engagement. Several solutions have been proposed to measure student engagement, but few are behavior-based approaches. In response to these issues, in this paper we propose a new automatic multimodal approach to measure student engagement levels in real time. Thus, to offer robust and accurate student engagement measures, we combine and analyze three modalities representing students’ behaviors: emotions from facial expressions, keyboard keystrokes, and mouse movements. Such a solution operates in real time while providing the exact level of engagement and using the least expensive equipment possible. We validate the proposed multimodal approach through three main experiments, namely single, dual, and multimodal research modalities, on novel engagement datasets. In fact, we build new and realistic student engagement datasets to validate our contributions. We record the highest accuracy value (95.23%) for the multimodal approach and the lowest mean square error (MSE) value of 0.04.
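
The abstract describes fusing three behavioural modalities into one engagement level. Below is a hedged late-fusion sketch of that general idea; the modality weights and level thresholds are assumptions, not the values learned in the paper.

```python
from typing import Dict

# Illustrative late fusion: each modality yields an engagement score in [0, 1].
# The weights below are assumptions, not the paper's learned values.
MODALITY_WEIGHTS: Dict[str, float] = {
    "facial_expression": 0.5,
    "keystrokes": 0.25,
    "mouse": 0.25,
}

def fuse_engagement(scores: Dict[str, float]) -> float:
    """Weighted average of per-modality engagement scores (late fusion)."""
    total_weight = sum(MODALITY_WEIGHTS[m] for m in scores)
    return sum(MODALITY_WEIGHTS[m] * s for m, s in scores.items()) / total_weight

def engagement_level(score: float) -> str:
    """Discretise the fused score into coarse levels (thresholds are assumed)."""
    if score >= 0.66:
        return "high"
    if score >= 0.33:
        return "medium"
    return "low"

if __name__ == "__main__":
    fused = fuse_engagement({"facial_expression": 0.9, "keystrokes": 0.6, "mouse": 0.7})
    print(fused, engagement_level(fused))  # 0.775 high
```
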
... Because these devices work on planes separate from the display, additional markers such as cursors and pointers are needed. On the other hand, touchscreens, a more intuitive class of input interfaces, have been studied to enable direct interaction with displays by touch [71-74]. Touchscreen technologies can be categorized into finger-touch and stylus-touch methods. ...
Article
Full-text available
Touchscreens have been studied and developed for a long time to provide user-friendly and intuitive interfaces on displays. This paper describes touchscreen technologies in four categories: resistive, capacitive, acoustic wave, and optical methods. Then, it addresses the main studies on SNR improvement and stylus support for the capacitive touchscreens that have been widely adopted in most consumer electronics, such as smartphones, tablet PCs, and notebook PCs. In addition, the machine learning approaches for capacitive touchscreens are explained for four applications: user identification/authentication, gesture detection, accuracy improvement, and input discrimination.
... Since some of the selected papers include mouse movements along with typing patterns, we investigated only the accuracy rates of keystroke dynamics. Table III reveals that the best performance results are observed in [29] and [36]. In [29], an accuracy rate of 91.24% was achieved with a 4.35% FPR using SVM for the "fright" emotion. ...
... Table III reveals that the best performance results are observed in [29] and [36]. In [29], an accuracy rate of 91.24% was achieved with a 4.35% FPR using SVM for the "fright" emotion. In [36], logistic regression, SVM, nearest neighbor, C4.5, and Random Forest were applied to the data. ...
... The best accuracy rates were obtained using KNN, reaching 84% for arousal and 83% for valence. However, we should note that the dataset used in [29] is very small (4 emotions) compared to [36] (14 emotional states using the Arousal-Valence model). ...
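
For readers unfamiliar with how such accuracy and FPR figures are obtained, here is a hedged scikit-learn sketch of a binary SVM over synthetic keystroke-style features. It is not the pipeline of [29] or [36]; the feature values are random stand-ins and the metrics are computed only to show the procedure.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# Synthetic "fright vs. not-fright" data: two keystroke-style features
# (e.g. mean dwell time, mean flight time). Real studies extract these
# from logged typing sessions; the distributions here are assumptions.
rng = np.random.default_rng(0)
X_neutral = rng.normal(loc=[0.12, 0.25], scale=0.03, size=(200, 2))
X_fright = rng.normal(loc=[0.09, 0.35], scale=0.03, size=(200, 2))
X = np.vstack([X_neutral, X_fright])
y = np.array([0] * 200 + [1] * 200)  # 1 = "fright"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
print("accuracy:", accuracy_score(y_te, y_pred))
print("false positive rate:", fp / (fp + tn))  # FPR, as reported in such studies
```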
... Its application is based on the radial distribution of renal tubules, which leads to anisotropy of diffusion. The renal medulla diffusion tensor tractography (DTT) post-processing technique can display the densely arranged, radially oriented bundles in the renal medulla and can visually show the course and distribution patterns of the tubules (Bakhtiyari, Taghavi, and Husain 2015). ...
Article
Full-text available
In the diagnosis of hydronephrosis, there are cases where the pathological symptoms and causes are difficult to judge. In order to improve the ultrasound diagnosis of hydronephrosis, this paper analyzes traditional image-resolution processing algorithms through comparative analysis and, based on convolutional neural network (CNN) technology, proposes a fast image super-resolution reconstruction algorithm. The method improves image-processing efficiency and combines the CNN to effectively process hydronephrosis ultrasound images. In addition, this study analyzes the clinical effects of the algorithm by establishing a controlled trial. The analysis shows that the results of this study can provide an effective reference for the diagnosis of hydronephrosis from ultrasound images, which has certain clinical significance.
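
As a rough illustration of CNN-based super-resolution (not the paper's algorithm), the following PyTorch sketch upsamples a single-channel image by interpolation and refines it with a small SRCNN-style network; the layer sizes and scale factor are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRCNN(nn.Module):
    """Minimal SRCNN-style network: upscale by interpolation, then refine the
    result with three convolutional layers. Illustrative only."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Bicubic upsampling gives the coarse high-resolution estimate;
        # the convolutions learn a residual refinement on top of it.
        up = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                           align_corners=False)
        return up + self.features(up)

if __name__ == "__main__":
    low_res = torch.rand(1, 1, 64, 64)      # stand-in for a grayscale ultrasound patch
    high_res = TinySRCNN(scale=2)(low_res)
    print(high_res.shape)                   # torch.Size([1, 1, 128, 128])
```
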
... cognitive, physiological, expressive, and motivational aspects. Some researchers classified emotions based on the requirements of their research (Bakhtiyari et al., 2015). Typically, emotions are classified into basic emotions and multi-dimensional emotions. ...
Article
Full-text available
Detecting students’ engagement is an important key to improving an e-learning system. An e-learning system adapted to learner emotions is considered an innovative system. Among the challenges that researchers face is how to measure students’ engagement based on their emotions. During the past few years, several solutions were proposed to measure students’ engagement, but few solutions detect the engagement level without considering whether the student is learning or not. In this paper, we review current work on student emotions and engagement levels. Based on that, we build our engagement levels and link them with the appropriate emotions. Then, we propose an affective model and a new process to detect the final engagement level. The efficiency of the proposed affective model is shown experimentally by conducting a series of experiments. Firstly, we compute the Matching Score (MS) and Mismatching Score (MisMS) for each engagement level. Secondly, we apply the new engagement-level detection process to severe cases. Thirdly, we analyze all emotions in each level of engagement to detect strong emotions. We record Matching Scores (MS) in the range [71.2%, 100%]. Finally, we propose some suggestions to improve the affective model.
... The link between touchscreen-based typing and mood changes was investigated in [76], which uses digraphs and trigraphs and takes into account other touchscreen-related features, such as the number of hands used for typing. In [77], the use of a touchscreen is also combined with keyboard and mouse features for affective computing. Indeed, there is a vast literature evaluating different data sources for affective state detectors [34]. ...
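
A minimal sketch of the digraph/trigraph idea mentioned above: latencies are computed over every two or three consecutive key presses. The event format and the example timings are assumptions; real feature sets also include dwell times and touchscreen-specific attributes such as the number of hands used.

```python
from typing import List, Tuple

KeyEvent = Tuple[str, float]  # (key, press timestamp in seconds)

def ngraph_latencies(events: List[KeyEvent], n: int) -> List[float]:
    """Time from the first to the last key press of every n consecutive keys
    (n=2 gives digraph latencies, n=3 gives trigraph latencies)."""
    return [events[i + n - 1][1] - events[i][1]
            for i in range(len(events) - n + 1)]

if __name__ == "__main__":
    typing_session = [("h", 0.00), ("e", 0.14), ("l", 0.29), ("l", 0.45), ("o", 0.58)]
    print(ngraph_latencies(typing_session, 2))  # digraph latencies
    print(ngraph_latencies(typing_session, 3))  # trigraph latencies
```
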
Article
Full-text available
There is strong evidence that emotions influence the learning process. For this reason, we explore the relevance of individual and general mouse and keyboard interaction patterns in real-world settings while learners perform free-text tasks. To this end we have modeled users’ mouse movements and keystroke dynamics with data mining techniques, building on previous related research and extending it in terms of some critical modeling issues that may have an impact on detection results. Inspired by practice in affective computing where physiological sensors are used, we argue for the creation of an interaction baseline model as a reference point for how learners interact with the keyboard and mouse. To make the proposed affective model feasible, we have adopted a simplified two-dimensional self-labeling approach for labeling the users’ affective state. Our approach to affect detection improves results when only a small amount of data instances is available and does not require additional affect-oriented tasks from the learners. Specifically, learners are only asked to self-report their emotional state after finishing the tasks by immediately selecting two values on the affect scale. The approach we have followed aims to distill two types of interaction patterns: 1) within-subject patterns (from a single participant) and 2) between-subject patterns (across all participants). By doing this, we aim to combine both approaches as modeling factors, thus taking advantage of individual and general interaction patterns to predict affect.
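
To illustrate the baseline idea described in the abstract, the sketch below standardises interaction features either against a learner's own data (within-subject) or against all learners' data (between-subject). The feature names and values are invented for illustration and are not the paper's features.

```python
import numpy as np
from typing import Dict

def zscore(x: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Standardise x using the mean/std of a reference (baseline) sample."""
    return (x - reference.mean(axis=0)) / (reference.std(axis=0) + 1e-9)

def normalise(sessions: Dict[str, np.ndarray], within_subject: bool) -> Dict[str, np.ndarray]:
    """Within-subject: each user's sessions are their own baseline.
    Between-subject: all users' sessions form one shared baseline."""
    all_data = np.vstack(list(sessions.values()))
    return {user: zscore(data, data if within_subject else all_data)
            for user, data in sessions.items()}

if __name__ == "__main__":
    sessions = {
        "alice": np.array([[120.0, 0.8], [130.0, 0.9]]),  # e.g. [keys/min, mouse speed]
        "bob":   np.array([[60.0, 0.3], [70.0, 0.5]]),
    }
    print(normalise(sessions, within_subject=True)["alice"])
    print(normalise(sessions, within_subject=False)["alice"])
```
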
... item features, user demographic data and time) or direct user feedback, such as ratings given to items by users. (2) Implicit: the input data are based on behavioral usage, such as a user's purchase behavior, browser session location, the number of times a user has listened to a song, or detection of the user's feeling about the song [18]. ...
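
A small sketch of the explicit/implicit distinction, assuming toy user-item events: explicit feedback is stored as user-supplied ratings, while implicit feedback is inferred from behaviour, here as play counts.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def explicit_matrix(ratings: List[Tuple[str, str, float]]) -> Dict[str, Dict[str, float]]:
    """Explicit feedback: (user, item, rating) triples -> user -> item -> rating."""
    out: Dict[str, Dict[str, float]] = defaultdict(dict)
    for user, item, rating in ratings:
        out[user][item] = rating
    return dict(out)

def implicit_matrix(plays: List[Tuple[str, str]]) -> Dict[str, Dict[str, int]]:
    """Implicit feedback: (user, item) play events -> user -> item -> play count."""
    out: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for user, item in plays:
        out[user][item] += 1
    return {u: dict(items) for u, items in out.items()}

if __name__ == "__main__":
    print(explicit_matrix([("u1", "song_a", 4.5), ("u1", "song_b", 2.0)]))
    print(implicit_matrix([("u1", "song_a"), ("u1", "song_a"), ("u2", "song_b")]))
```
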
Article
Full-text available
Promoting recommender systems in real-world applications requires deep investigations with emphasis on their next generation. This survey offers a comprehensive and systematic review of recommender system development lifecycles to enlighten researchers and practitioners. The paper conducts statistical research on published recommender systems indexed by Web of Science to get an overview of the state of the art. Based on the reviewed findings, we introduce taxonomies driven by the following five phases: initiation (architecture and data acquisition techniques), design (design types and techniques), development (implementation methods and algorithms), evaluation (metrics and measurement techniques) and application (domains of application). A layered framework of recommender systems containing market strategy, data, recommender core, interaction, security and evaluation is proposed. Based on the framework, existing advanced humanized techniques that have emerged from computational intelligence, along with some inspiring insights from computational economics and machine learning, are provided for researchers to expand the novel aspects of recommender systems.
... A model relating these features is expected to be found. Therefore, the user's perception and emotion can be predicted while he/she is interacting with a PC, without being explicitly questioned [25,26]. The results of this research can be integrated as a Decision Support System (DSS) into computational advertising systems to recommend online ads in more effective forms of presentation. ...
Conference Paper
Full-text available
Online advertising is a rapidly growing area with high commercial relevance. This paper investigates the effect of different types of ad presentation, varying in frame size, position and animation level, on the visual intrusiveness and annoyance perceived by users. Furthermore, we investigate the influence of users’ emotional states on perceived intrusiveness and annoyance. This research has been carried out through a survey study. The analysis of the data shows a linear correlation between visual attention to the ads and their features. Also, a positive influence of emotion has been found on various types of ad presentation. In addition, participants with emotions of positive valence and low arousal showed more tolerance to the same ad than users in a different emotional state. This research proposes a new aspect in computational advertising: adapting the recommendations based on the user’s emotional state and the parameters of the online advertisements.
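
As a hedged illustration of the kind of linear-correlation analysis the abstract reports, the sketch below computes a Pearson correlation between one assumed ad feature (animation level) and made-up intrusiveness ratings; neither the variable names nor the numbers come from the study.

```python
import numpy as np

# Toy survey data: animation level of an ad (0 = static .. 3 = highly animated)
# and the perceived intrusiveness rating reported by participants (1..5).
animation_level = np.array([0, 0, 1, 1, 2, 2, 3, 3])
perceived_intrusiveness = np.array([1.2, 1.5, 2.1, 2.4, 3.0, 3.3, 4.1, 3.9])

# Pearson correlation coefficient between the feature and the rating.
r = np.corrcoef(animation_level, perceived_intrusiveness)[0, 1]
print(f"Pearson r = {r:.2f}")  # strong positive linear correlation on this toy data
```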