Figure 2. Face mapping process by FaceTracker.

Source publication
Conference Paper
There is an increasing number of studies in the area of Human-Computer Interaction (HCI) that bear witness to the importance of taking account of emotional factors in interactions with computer systems. By getting to know the emotions of the users, it is possible for artificial agents to have an influence on human feelings with a view to stimulati...

Context in source publication

Context 1
... entails collecting data from certain types of sensors: electrodermal activity, atmospheric air temperature and heart rate variability. In a similar way, [20] carried out a study with the aim of predicting a user's different emotional states through physiological reactions and comparing them with three types of computing models. Three classification algorithms were employed - Decision Rule-based, k-Nearest Neighbor (kNN) and Tree Decomposition - to build the prediction models with the physiological features that had been obtained.

Another approach entails the detection and classification of emotions through the user's behavioral tendencies. [7] offers a computing solution to assist users in controlling and overcoming the emotional state of frustration. The system requests information about an individual's emotional state at every moment during an interaction and provides feedback, especially with regard to emotional factors. The purpose of this is to put users at ease and tell them that they are not the only people to experience this kind of feeling. In a similar way, [11] employs a model for mood recognition that is based on six sources of information: Short Message Service (SMS), e-mails, phone calls, the use of applications, navigation on the web and localization. According to [11], automatic mood detection allows new applications to be created and existing ones to be improved.

The research study proposed by [1] sets out a model based on the following classification algorithms - Linear Regression and Artificial Neural Networks - for the real-time analysis of emotions based on motor expressions and physiological reactions. One of the sensors carried out the recognition of facial emotions while another analyzed the heart rate variability and the user's electrodermal activity. Five specialists analyzed the videos with the aim of defining the classes (outputs) for the classification algorithms, for example by determining whether the user was depressed at a particular time. In a similar way, the identification of emotions based on facial features was investigated by [9]. Geometric features such as the shapes of the facial elements (the eyes, mouth, nostrils, chin and eyebrows) and the location of predefined facial points were used to create six distinct facial representations. Statistical tests showed that the facial representations containing information about the differences of the face when compared with its neutral state, as well as those combining this information with single frame-based features, led to greater improvements in the experimental results.

In light of the foregoing, it can be noted that an increasing number of studies has been devoted to exploring techniques for developing systems that employ sensors to identify a particular emotion felt by a user of computing devices. These studies also claim that the classification of emotions allows decisions about the interaction between users and their computing devices to be made with greater precision. However, it should be stressed that there is a lack of studies that use an Ensemble of Classifiers (EC) for the classification of emotions. This is why we have sought to fill this gap in the literature and offset the limitations of employing a single classification technique.
As stated earlier, this has involved setting out and evaluating an EC to identify users' emotions on the basis of their facial expressions. Since there has been great interest in the scientific literature in finding alternative methods of identifying and classifying emotions, we have set out a model based on an EC for undertaking this task by analyzing the facial expressions of users. The aim of this model is to employ a facial recognition application (FaceTracker [17]) and discover the user's emotion by examining a combination of responses from the classification algorithms. Methods based on geometric features are used in facial modeling (motor expressions) with a view to adopting an approach that resembles the way human beings interpret the different parts of the face. Thus, different facial representations (a neutral state, joy, sadness, fear, anger and surprise) are recommended for the identification of feelings, since they are able to encode the facial configuration displayed during the expression of an emotion.

FaceTracker [17] is a computer vision system used to obtain information about facial features. It makes use of an optimization strategy which employs a linear approximation, adjusting reference points in consistent locations to fit a model designed to work within fixed parameters; it is based on a benchmark face model consisting of 66 feature points. The algorithm seeks to align the elements of the face being analyzed with the feature points of the reference model. Fig. 2 provides an example of the mapping of a face carried out by FaceTracker. Following [9], we altered the FaceTracker algorithm so that it mapped only a subset of the 66 points initially obtained, using just 33 feature points of the face. The purpose of this was to reduce computational costs by eliminating possible redundancy.

The aim of including elements in the proposed facial representation is to model the facial features that are closely related to movements which, according to psychologists, are caused by emotional expression [5]. The elements are based on 33 facial points: eight mapping the mouth, six for each of the eyes, three for each eyebrow and the chin, two for the nostrils and two for delineating the lateral extremities of the face near the eyes. As well as modeling the shape of the eyes, the mouth and the facial regions affected by emotional muscular movements, eight other areas are mapped. In addition, the distances between each pair of distinct points, and the angles that the line connecting them makes with the horizontal axis, are obtained for all possible combinations of the points; with 33 points there are C(33, 2) = 528 such pairs. This creates a representation with dimensionality D1 = 2 · 33 + 8 + 2 · 528 = 1130.

Although ML methods are widely used, it is difficult for the classifiers they produce to achieve 100% accuracy [19]. This is because the performance of each method depends on the adjustment of several parameters, as well as on the degree of difficulty of each particular problem. The range of configurations that can be given to these parameters results in different classification decisions. The best configuration is usually chosen after several classifiers have been built with different configurations; these classifiers are then tested, the classifier with the best performance is chosen, and the others are rejected.
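To make the dimensionality computation above concrete, the construction of the 1130-dimensional vector can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the landmark ordering and the computation of the eight mapped areas are not specified in the excerpt, so the areas are taken here as precomputed inputs.

    import numpy as np
    from itertools import combinations

    def geometric_features(points, areas):
        # points: (33, 2) array of (x, y) landmark coordinates from FaceTracker
        # areas: length-8 sequence of mapped facial-region areas (assumed given)
        points = np.asarray(points, dtype=float)
        parts = [points.ravel(),                  # 2 * 33 = 66 coordinates
                 np.asarray(areas, dtype=float)]  # 8 region areas

        dists, angles = [], []
        for i, j in combinations(range(33), 2):   # C(33, 2) = 528 point pairs
            dx, dy = points[j] - points[i]
            dists.append(np.hypot(dx, dy))        # Euclidean distance
            angles.append(np.arctan2(dy, dx))     # angle with the horizontal axis
        parts += [np.array(dists), np.array(angles)]

        vec = np.concatenate(parts)
        assert vec.size == 2 * 33 + 8 + 2 * 528   # = 1130
        return vec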
The selection of a single classifier means that a significant amount of potentially useful information is rejected. For this reason, the concept of the EC has been found to be an ideal solution for the development of high-performance systems in the area of pattern recognition [19]. This is based on the premise that the combination of classifiers can lead to improved performance in terms of better generalization and/or increased accuracy. One of the factors that ensures the good performance of an EC is the degree of diversity found among its components [4]. For the classifiers to be regarded as diverse, their errors must not be strongly correlated; in other words, they must not make mistakes on the same patterns. Thus, for an ensemble to achieve an acceptable performance, it must be made up of classifiers that have a reasonable degree of accuracy and that are not subject to coincident failures, so that the errors of one classifier can be corrected through the output of the other components [4, 19].

Fig. 3 shows the framework for a face-based classification module combined with an EC. The proposal employs the following classification techniques, which are increasingly being used for the analysis of emotional responses, in the development of the EC: kNN, Decision Tree or Logistic Regression, Fuzzy Logic, Bayesian Networks and Support Vector Machine (SVM) [3, 14, 16, 20]. Taking the results of the facial mapping carried out by FaceTracker as input, the first layer of the process (Fig. 3) consists of individual classifiers that can have different kinds of architecture but a common form of output, such as an ordering of candidate classes. The second layer involves decision-making, which makes use of the results of the previous layer to reach a general decision for the EC. It should be noted that Equation 1 was used to reach the final decision of the ensemble. This involved a weighting procedure in which proportional weights were assigned to each algorithm in accordance with its degree of accuracy. This normalized weighting is calculated with Equation 1, where x is the n-th classifier and W_x is the weight associated with its classification. Further, AR corresponds to the average accuracy rate of algorithm x. Finally, i and n are the first and last classifier, ...
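Equation 1 itself did not survive the extraction. From the description above (weights proportional to each classifier's average accuracy rate, normalized so that they sum to one), a plausible reconstruction is

    W_x = \frac{AR_x}{\sum_{i=1}^{n} AR_i}    (1)

On that reading, the second-layer decision can be sketched as a weighted vote. The names and array shapes below are illustrative assumptions, not the authors' code:

    import numpy as np

    def ensemble_decision(class_scores, accuracy_rates):
        # class_scores: (n_classifiers, n_classes) array; row k holds
        #   classifier k's scores over the candidate emotion classes
        # accuracy_rates: per-classifier average accuracy rates (AR)
        ar = np.asarray(accuracy_rates, dtype=float)
        weights = ar / ar.sum()                        # Equation 1 (reconstructed)
        combined = weights @ np.asarray(class_scores)  # weighted score per class
        return int(np.argmax(combined))                # winning emotion class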

Similar publications

Preprint
While AI has benefited humans, it may also harm humans if not appropriately developed. The priority of current HCI work should focus on transitioning from conventional human interaction with non-AI computing systems to interaction with AI systems. We conducted a high-level literature review and a holistic analysis of current work in developing AI syst...

Citations

... Using SVM and LDA classifiers, Bhardwaj (2015) classified emotions from EEG signals with average overall accuracies of 74.13% and 66.50%, respectively [62]. Mano proposed an ensemble model for emotion classification based on motor facial expressions that achieved greater accuracy than a single classifier [63]. ...
Article
This study presents a comprehensive literature review on the convergence of affective computing, interactive installation art, multi-dimensional sensory stimulation, and artificial intelligence (AI) in measuring emotional responses, demonstrating the potential of artificial intelligence in emotion recognition as a tool for sustainable development. It addresses the problem of understanding emotional response and measurement in the context of interactive installation art under artificial intelligence (AI), emphasizing sustainability as a key factor. The study aims to fill the existing research gaps by examining three key aspects: sensory stimulation, multi-dimensional interactions, and engagement, which have been identified as significant contributors to profound emotional responses in interactive installation art. The proposed approach involves conducting a process analysis of emotional responses to interactive installation art, aiming to develop a conceptual framework that explores the variables influencing emotional responses. This study formulates hypotheses that make specific predictions about the relationships between sensory stimulation, multi-dimensional interactions, engagement, and emotional responses. By employing the ASSURE model combined with experimental design, the research methodology ensures a systematic and comprehensive study implementation. The implications of this project lie in advancing the understanding of emotional experiences in interactive installation art under AI, providing insights into the underlying mechanisms that drive these experiences, and their influence on individual well-being from a sustainable perspective. The contributions of this research include bridging the identified research gaps, refining theoretical frameworks, and guiding the design of more impactful and emotionally resonant interactive artworks with sustainability in mind. This research seeks not only to fill the existing gaps in understanding emotional experiences in interactive installation art, but also to guide the development of immersive and emotionally engaging installations, ultimately advancing the broader field of human–computer interaction, promoting individual well-being, and contributing to sustainable development.
... The system uses four machine learning algorithms (SVM, KNN, random forest, and classification and regression trees), and the best accuracy rates were obtained using the KNN and SVM algorithms. Other studies [47], [70] used ensemble classifiers to enhance the precision of emotion classification. Multiple kernel learning (MKL) was shown to boost the classification performance by using multiple kernels rather than a single fixed kernel. ...
Article
Facial expression analysis aims to understand human emotions by analyzing visual face information and is a popular topic in the computer vision community. In educational research, the analyzed students’ affect states can be used by faculty members as feedback to improve their teaching style and strategy so that the learning rate of all the students present can be enhanced. Facial expression analysis has attracted much attention in educational research, and a few reviews on this topic have emerged. However, previous reviews on facial expression recognition methods in educational research focus mostly on summarizing the existing literature on emotion models from a theoretical perspective, neglecting technical summaries of facial expression recognition. In order to advance the development of facial expression analysis in educational research, this paper outlines the tasks, progress, challenges, and future trends related to facial expression analysis. First, facial expression recognition methods in educational research lack an overall framework. Second, studies based on the latest machine learning methods are not mentioned in previous reviews. Finally, some key challenges have not been fully explored. Therefore, unlike previous reviews, this systematic review summarizes two kinds of educational research methods based on facial expression recognition and their application scenarios. Then, an overall framework is proposed, along with various kinds of machine learning methods and published datasets. Finally, the key challenges of face occlusion and the expression uncertainty problem are presented. This study aims to capture the full picture of facial expression recognition methods in educational research from a machine learning perspective.
... We used the approach of Mano et al. (2015) and Mano (2018) for the FaceTracker algorithm, since it maps a subset of the 66 points initially obtained and uses only 33 feature points. The purpose is to reduce computational costs by eliminating possible redundancy. ...
... As in Mano et al. (2015) and Mano (2018), we also considered the distances between two distinct points and the angles made by the line connecting them with the horizontal axis, all of them obtained in all possible combinations of the points. As a result, a representation of dimensionality D1 = 2 · 33 + 8 + 2 · 528 = 1130 is created. ...
... Despite their extensive use, the classifiers generated by ML methods rarely achieve 100% accuracy (Mano et al. 2015; Mano 2018), since their performance depends on several parameter settings, the completeness of the data sample used for the training, and the degree of difficulty associated with the particular problem. The selection of a single classifier involves rejection of a significant amount of potentially useful information. ...
Article
Several studies in the field of human–computer interaction have focused on the importance of emotional factors related to the interaction of humans with computer systems. With knowledge of the users' emotions, intelligent software can be developed for interacting with and even influencing users. However, such a scenario is still a challenge in the field of human–computer interaction. This article endeavors to enhance intelligence in such types of systems by adopting an ensemble-based model that is able to identify and classify emotions. We developed a system (music player) that can be used as a mechanism to interact with and/or persuade someone to “change” his/her current emotional state. In order to do this, we also designed a generic model that accepts any kind of interaction or persuasion mechanism (e.g., preferred YouTube channel videos, games, etc.) to be deployed at runtime based on the needs of each user. We showed that the approach based on a genetic algorithm for the weight assignment of the ensemble achieved an average accuracy of 80%. Moreover, the results showed a 60% increase in the level of users' satisfaction regarding the interaction with users' emotions.
... The known basic emotions are joy, disgust, fear, neutrality, anger, surprise and sadness (12-13), in addition to the neutral state, which is also considered and used as a reference for emotional states. In this sense, representations of emotions have been used in several computational applications with good performance (11, 14-15). The broad spectrum of applications and the constant increase in computational processing power have been motivating researchers to identify users' emotions in various commercial and research contexts, and to use this information as a basis, for example, for decision-making and for the analysis of satisfaction and behavior during task execution (15). ...
... (12). In this study, a software system (14) was used, which is characterized as a system (14). The model used in this study was tested and validated in previous studies (14-15, 20) (11). ...
Article
Objective: to compare the effect of exposure to unpleasant odors in a simulated clinical environment on the emotions of undergraduate nursing students. Method: quasi-experimental study. A total of 24 nursing students participated in the study, divided into two groups: 12 in the intervention group, with exposure to unpleasant odors, and 12 in the control group, without exposure to unpleasant odors. To simulate the unpleasant vomiting odor in the intervention group, fermented foods were used: boiled oats, curdled milk, spoiled Parmesan cheese, raw egg, pea soup, raisins and vinegar. Participants were filmed, and the facial expression analysis was performed at six critical points: student approach; report of the complaint; clinical evaluation; and the patient occurrence, intervention and reevaluation, based on the Circumplex model of emotion recognition. Results: a total of 83,215 emotions related to the six critical points were verified. At the critical point of the proposed scenario with exposure to unpleasant odors, the intervention group presented the basic emotion of sadness and the control group, anger. Conclusion: it is inferred that the inclusion of unpleasant odors in simulated scenarios can broaden the emotional development of health students.
... (i) Face analysis (FA): the task undertaken for face identification and facial expression analysis is based on the work carried out by Mano et al. [23], which entails the analysis of geometric characteristics and the approach adopted by the Classification Committee for classifying the emotional state of the user. Figure 3 shows the face-mapping procedure, after which the distances and the angles that the lines connecting two distinct points make with the horizontal axis are obtained for all possible combinations of points, providing a representation with a dimensionality of 1,130 attributes. ...
... In turn, the classification module carries out a classification task based on face-mapping attributes. This module aims to use the face-mapping procedure and, through a combination of the response values of classification algorithms, identify and categorize the user's emotion so that computing systems can interact more assertively with the user's emotional state [3, 23]. (ii) Heart rate analysis (HRA): the procedure for heart rate mapping is based on a study by Richter et al. The study evaluates the relationship between heart rate variation (beats per minute (bpm)) and the patient's emotional state and its connection with health problems, especially hypertension [24]. ...
...
    Cls ← sendLocalServer(D1);
    Cc ← sendCloud(D1);
    storeLocalDevice(Cld);
    storeLocalServer(Cls);
    storeCloudCost(Cc);
else
    eCld ← estimateLocalDevice();
    eCls ← estimateLocalServer();
    eCc ← estimateCloudProcessing();
    if eCld ≤ eCls then
        Cld ← processingLocal(Di);
        storeLocalDevice(Cld);
    else if eCls ≤ eCc then
        Cc ← sendLocalServer(Di);
        storeLocalServer(Cls);
    else
        Cc ← sendCloud(Di);
        storeCloudCost(Cc);
    end
end
end

ALGORITHM 1: Algorithm for decision-making of load distribution between layers. ...
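Read end to end, the excerpt describes a three-way cost comparison: estimate the processing cost on the local device, the local server and the cloud, then run the task on the cheapest layer. A minimal sketch of that decision logic in Python (the function names are hypothetical stand-ins, not the paper's API):

    def choose_layer(data, estimate_device, estimate_server, estimate_cloud,
                     process_device, process_server, process_cloud):
        # The estimate_* callables predict the processing cost of `data` on
        # each layer; the process_* callables run the task there and return
        # its measured cost, mirroring the branch order of Algorithm 1 above.
        e_device = estimate_device(data)
        e_server = estimate_server(data)
        e_cloud = estimate_cloud(data)

        if e_device <= e_server:      # device no costlier than the server
            return process_device(data)
        elif e_server <= e_cloud:     # server no costlier than the cloud
            return process_server(data)
        else:                         # otherwise fall back to the cloud
            return process_cloud(data)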
Article
Owing to the increase in the number of people with disabilities, as a result of either accidents or old age, there has been an increase in research studies in the area of ubiquitous computing and the Internet of Things. They are aimed at monitoring health, in an efficient and easily accessible way, as a means of managing and improving the quality of life of this section of the public. It also involves adopting a Health Homes policy based on the Internet of Things and applied in smart home environments. This is aimed at providing connectivity between the patients and their surroundings and includes mechanisms for helping the diagnosis and prevention of accidents and/or diseases. Monitoring gives rise to an opportunity to exploit the way computational systems can help to determine the real-time emotional state of patients. This is necessary because there are some limitations to traditional methods of health monitoring, for example, establishing the behavior of the user’s routine and issuing alerts and warnings to family members and/or medical staff about any abnormal event or signs of the onset of depression. This article discusses how a layer-based architecture can be used to detect emotional factors to assist in healthcare and the prevention of accidents within the context of Smart Home Health. The results show that this process-based architecture allows a load distribution with a better service that takes into account the complexity of each algorithm and the processing power of each layer of the architecture to provide a prompt response when there is a need for some intervention in the emotional state of the user.
... Thus, emotional aspects have been studied for a long time in the field of Psychology (Lichtenstein et al. 2008). Only in recent years, though, has research on the subject significantly increased in Computing and in areas such as Affective Computing and Human-Computer Interaction (Picard 2003; Bailenson et al. 2008; Zhou et al. 2011; Peter and Urban 2012; Mano et al. 2015, 2016a). In view of this observation, current research reveals the importance of taking into account the emotional aspects in the interaction between users and computer systems (Oinas-Kukkonen 2013; Mano et al. 2015, 2016a; Mano 2018). ...
... (… et al. 2013; Becker et al. 2015; Mano et al. 2015; Nardelli et al. 2015; Arigbabu et al. 2016; Mano et al. 2016a, b; Patwardhan and Knapp 2016; Fuentes et al. 2017; Mano 2018) has generated a wide range of methods capable of extracting behavioral patterns from the individual to infer emotional aspects. ...
Article
Current research in human–computer interaction reveals the importance of taking into account emotional aspects in the interaction with computer systems. The main objective of promoting this emotional recognition is to contribute to the enhanced coherence, consistency and credibility of the computer's reactions and responses during human interaction through the human–computer interface. In that context, computer systems that can identify and classify the user's emotional condition can be explored. In this study, computer techniques are used to identify and classify emotional aspects based on the users' discourse, aiming to assess emotional behavior in the daily reality of critical care professionals. Hence, by studying computer techniques and psychological theories, the two areas can be related in order to classify emotion through a framework for the acquisition and classification of users' discourse. The system developed was applied as an additional evaluation method in the development of simulated scenarios in the context of Clinical Simulation and demonstrated its efficiency as a new approach in health assessment.
... Different facial representations were included for the identification and classification of emotions which comprised the basic emotions (happiness, fear, anger, a neutral state, surprise and sadness) (Ekman, 1973). This also involved codifying the facial setting of an individual (including the mouth, eyes, eyebrows and nose) by making use of characteristic "points" and geometrical elements, like angles, distances or areas that are used for geometrical facial representation (Mano et al., 2015, 2016). The classification of emotions shown by the facial expression is based on computational algorithms that interpret universal patterns of emotions. ...
... The classification of emotions shown by the facial expression is based on computational algorithms that interpret universal patterns of emotions. The term "universal" indicates that the same muscular movements are produced by face expressions (Mano et al., 2015, 2016). This approach is supported by cross-cultural studies (Ekman, 1973), which suggest that human beings express basic emotions conveyed by facial expressions in the same way, regardless of the culture, ethnicity and customs of the people concerned. ...
... - Computational Assessment of Emotions - this is about the identification and classification of the face on the basis of the computational model designed by Mano et al. (2015, 2016). The aim of the approach is to model the facial parts that are essential for the movements and expression of emotions. ...
Article
Clinical simulation allows discussions about improving the quality of patient care. This method is effective with regard to satisfaction, self-confidence and student motivation. However, during the assessment, the students have emotional reactions that have tended to be overlooked. In view of this, this article seeks to identify and describe the relationship between the emotions observed through facial expressions and the students' degree of satisfaction and self-confidence during simulated practices among nursing students. The analysis based on the scales showed high satisfaction and self-confidence levels, and it was found that the predominant basic emotion was anger, which is caused by other correlated emotions like tension and stress. This divergence between the identified emotions opens up space for further investigations into the level of motivation and the stimulus to learning that these emotions can provide, and the extent to which they can lead to satisfaction and self-confidence.
... All other emotional categories are constructed based on a combination of these basic emotions. Continuing in this direction, Mano et al. (2015) added a neutral state, a facial expression with no apparent emotion, using it as a reference point for detecting emotional states. ...
... It is important to note that both representations of emotions have been used in several computer applications with good results (Scherer, 2005; Mano et al., 2015, 2016). The broad spectrum of applications and the constant increase in the capacity of computer processing have been motivating researchers to attempt to identify emotions and use this information in the analysis of decision-making, satisfaction, and task execution. ...
... During the development of the scenario, video cameras were placed to record the student's faces for posterior analysis and determination of the emotions displayed. The analysis consisted of three steps (Mano et al., 2015): ...
Article
Simulation-based assessment relies on instruments that measure knowledge acquisition, satisfaction, confidence, and the motivation of students. However, the emotional aspects of assessment have not yet been fully explored in the literature. This dimension can provide a deeper understanding of the experience of learning in clinical simulations. In this study, a computer (software) model was employed to identify and classify emotions with the aim of assessing them, while creating a simulation scenario. A group of (twenty-four) students took part in a simulated nursing care scenario that included a patient suffering from ascites and respiratory distress syndrome followed by vomiting. The patient's facial expressions were recorded and then individually analyzed on the basis of six critical factors that were determined by the researchers in the simulation scenario: 1) student-patient communication, 2) dealing with the patient's complaint, 3) making a clinical assessment of the patient, 4) the vomiting episode, 5) nursing interventions, and 6) making a reassessment of the patient. The results showed that emotion recognition can be assessed by means of both dimensional (continuous models) and cognitive (discrete or categorical models) theories of emotion. With the aid of emotion recognition and classification through facial expressions, the researchers succeeded in analyzing the emotions of students during a simulated clinical learning activity. In the study, the participants mainly displayed a restricted affect during the simulation scenario, which involved negative feelings such as anger, fear, tension, and impatience, resulting from the difficulty of creating the scenario. This can help determine which areas the students were able to master and which caused them greater difficulty. The model employed for the recognition and analysis of facial expressions in this study is very comprehensive and paves the way for further use and a more detailed interpretation of its components.
... Related to the employment of intelligent techniques in the assessment of emotions, Mano et al. (2015) present how an ensemble of classifiers can be employed to assess emotion from images, and Kahou et al. (2015) evaluate an intelligent approach for a sequence of images (i.e., video). Lan et al. (2014) and Hou et al. (2015) present evaluations taking into account the use of an EEG sensor. ...
Article
Users’ emotional states influence decision making and are essential for understanding and explaining users’ behavior with computer applications. However, collecting emotional states during the interaction time with users is an onerous task because it requires very careful handling of the empirical observation, leading researchers to carry out assessments of emotional responses only at the end of the interaction. This paper reports our research in assessing users’ behavior at interaction time and also describes the results of a case study which analyzed users’ emotional responses while interacting with a game. We argue that capturing emotions during interaction time can help us in making changes to users’ behavior (e.g., changing from a stressed to a less stressed state) or even suggesting that a user take a break. This is all possible if both (1) emotions are captured during interaction and (2) changes are suggested at runtime (e.g., through persuasion). The results of this study suggest that there are significant differences between emotional responses captured during the interaction and those declared at the end.
... Based on a facial reference model composed of 66 feature points, the algorithm seeks to align the elements of the face under analysis with the feature points of the reference model. In this sense, the FaceTracker algorithm was modified to map only a subset of 33 of the 66 facial points initially obtained [15], [32], which reduces the computational cost by eliminating possible redundancies. Figure 3 presents an example of the face mapping performed by the modified FaceTracker algorithm. ...
... For the classification of emotions, several classification techniques are applied to the analysis of emotional responses: kNN, Decision Tree, Fuzzy Logic, Bayesian Networks and SVM [1], [2], [35], [36], [37]. However, some works address the concept of the Classification Committee, in English, Ensemble of Classifiers (EC) [29], [32], [38]. This is due to the premise that the combination of classifiers can lead to an improvement in performance in terms of better generalization and/or increased accuracy [39]. ...
... Ruby on Rails was the programming language chosen for the implementation, owing to prior experience with it. Following the proposed structure, the development of the Web page for the interaction was divided into distinct modules, described below: [32], with an accuracy of 80.53%. ...
Article
Current studies in the field of Human-Computer Interaction highlight the relevance of emotional aspects in interacting with computer systems. It is believed that by allowing intelligent agents to identify the user's emotions, it becomes possible to induce and arouse emotions in order to stimulate users in their activities. One of the great challenges for researchers is to provide systems capable of recognizing, interpreting and reacting intelligently and sensitively to the emotions of the user. In this sense, we propose an effective interaction between the music player and the user that is based on emotional recognition through the interpretation of facial expressions. Therefore, this project aims to develop and evaluate a system that can identify the user's emotional state and provide a persuasive mechanism to change it (with a case study on a music player). It also explores a flexible approach to persuasion, with persuasive mechanisms that may vary between a music player, games and/or videos. Throughout the study, the model based on the Classification Committee was efficient in identifying the basic emotions and in achieving user satisfaction with the music player application.