Fig. 1. (a) The GUI window presented to users; (b) the Self-Assessment Manikin (SAM) form in the designed GUI.


Source publication
Conference Paper
Emotions are complex and may vary from person to person in a given situation. The purpose of this study is to perform emotion analysis using specific signal processing algorithms and to find the features and channels that are effective in emotion recognition, using 60 visual stimuli and EEG signals obtained from a 32-channel EEG device that is b...

Context in source publication

... was designed to display visual stimuli at certain time intervals. Specific visuals, commands, and warnings are used to help users understand and complete the experiment more easily. Three parts were designed in the GUI: visual, auditory, and video; however, only the visual and auditory parts were used. Fig. 1(a) shows the GUI window presented to users. After each stimulus, a Self-Assessment Manikin (SAM) questionnaire was administered so that users could rate their feelings on the GUI in terms of valence, arousal, dominance, and liking. Fig. 1(b) shows the SAM questionnaire. The SAM form contains 4 scales rated from 1 to 9, and each stimulus is evaluated according to the degree of emotion it evokes. Briefly, the valence scale refers to unhappiness versus happiness, the arousal scale to calmness versus excitement, and dominance to ...
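For readers implementing a similar rating form, here is a minimal sketch (all names are hypothetical, not from the paper) of how the four 1-to-9 SAM scales could be represented and validated:

```python
from dataclasses import dataclass

@dataclass
class SamRating:
    """One participant's SAM response to a single stimulus."""
    valence: int    # 1 = unhappy   ... 9 = happy
    arousal: int    # 1 = calm      ... 9 = excited
    dominance: int  # 1 = submissive ... 9 = dominant
    liking: int     # 1 = dislike   ... 9 = like

    def __post_init__(self):
        # Enforce the 1-to-9 range of each SAM scale.
        for name in ("valence", "arousal", "dominance", "liking"):
            value = getattr(self, name)
            if not 1 <= value <= 9:
                raise ValueError(f"{name} must be in 1..9, got {value}")

# Example: a response after one visual stimulus.
rating = SamRating(valence=7, arousal=5, dominance=6, liking=8)
```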

Similar publications

Conference Paper
In the first part of this study, a brief overview of the basic design of the universal functions originator (UFO) was introduced without conducting any numerical experiments to evaluate its performance or explore its capabilities. This part of the study covers the practical side of the proposed new AI computing system. For this mis...

Citations

... Classification algorithms can learn using different mathematical approaches. Consequently, the reviewed works (e.g., [24,45,46]) mention the most common classification algorithms based on the following: ...
Article
One of the biggest challenges for computers is collecting data from human behavior, such as interpreting human emotions. Traditionally, this process is carried out using computer vision or multichannel electroencephalograms. However, these require heavy computational resources located far from end users or from where the dataset was created. Sensors, on the other hand, can capture muscle reactions and respond on the spot, preserving information locally without requiring powerful computers. The subject of this research is therefore the recognition of the six primary human emotions using electromyography sensors in a portable device. The sensors are placed on specific facial muscles to detect happiness, anger, surprise, fear, sadness, and disgust. The experimental results showed that the Cortex-M0 microcontroller provides enough computational capability to store a deep learning model with a classification score of 92%. Furthermore, we demonstrate the necessity of collecting data from natural environments and how such data need to be processed by a machine learning pipeline.
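The abstract does not give the model or toolchain, but a common route to fitting a deep learning model on a Cortex-M0-class device is full-integer quantization with TensorFlow Lite. A hedged sketch under that assumption (the model size, window length, and calibration data are invented for illustration):

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the paper's EMG classifier: a tiny dense network
# over 64-sample EMG windows with six emotion classes.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(6, activation="softmax"),  # six primary emotions
])

def representative_windows():
    # Calibration samples for quantization; random here, real EMG in practice.
    for _ in range(100):
        yield [np.random.rand(1, 64).astype(np.float32)]

# Full-integer quantization so the model fits the flash/RAM budget of a
# Cortex-M0-class microcontroller.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_windows
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("emg_emotion.tflite", "wb") as f:
    f.write(converter.convert())
```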
... There are many studies using different traditional ML methods to classify emotions via EEG [11], [12]. Traditional machine learning methods require input features extracted from the signals. ...
... This is usually done to obtain features relevant to the emotional state from the EEG signals, and the process is grouped into three categories [8], [29]: a) Time-domain features. These are based on the time domain of a signal; examples reviewed in previous studies include mobility, complexity, and activity via the Hjorth parameters [57], fractal dimension via the Higuchi method [58], [59], event-related potential (ERP) features [60], and statistical features [61]. b) Frequency-domain features. ...
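As a concrete instance of the time-domain features listed above, a short sketch of the three Hjorth parameters (activity, mobility, complexity) referenced in [57]; the synthetic test signal is illustrative only:

```python
import numpy as np

def hjorth_parameters(x: np.ndarray) -> tuple[float, float, float]:
    """Return Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx = np.diff(x)    # first difference approximates the derivative
    ddx = np.diff(dx)  # second difference
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# Illustrative synthetic signal: a sinusoid plus noise.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * rng.standard_normal(1024)
print(hjorth_parameters(signal))
```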
Article
Electroencephalogram (EEG) signals have several advantages for recognizing emotions. Success, however, is strongly influenced by: i) the distribution of the data used, ii) differences in participant characteristics, and iii) the characteristics of the EEG signals themselves. In response to these issues, this study examines three important points that affect the success of emotion recognition, framed as research questions: i) What factors need to be considered to generate and distribute EEG data? ii) How can EEG signals be generated while accounting for differences in participant characteristics? iii) How do EEG signal characteristics manifest among the features used for emotion recognition? The results indicate some important challenges for further study in EEG-based emotion recognition research. These include: i) determining robust methods for imbalanced EEG data, ii) determining an appropriate smoothing method to eliminate disturbances in the baseline signals, iii) determining the best baseline-reduction methods to reduce participant-specific differences in the EEG signals, and iv) determining a robust capsule-network architecture that overcomes the loss of knowledge information and applying it to more diverse datasets.
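As one example of the baseline-reduction methods the authors call for, a minimal sketch of a common scheme (not necessarily the one the study recommends): subtracting the per-channel mean of a pre-stimulus baseline segment from each trial.

```python
import numpy as np

def remove_baseline(trial: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Subtract each channel's mean baseline value from the trial.

    trial, baseline: arrays of shape (channels, samples).
    """
    return trial - baseline.mean(axis=1, keepdims=True)

# Illustrative shapes: 32 channels, 3 s baseline and 60 s trial at 128 Hz.
rng = np.random.default_rng(0)
baseline = rng.standard_normal((32, 3 * 128))
trial = rng.standard_normal((32, 60 * 128))
corrected = remove_baseline(trial, baseline)
```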
... The experimental process of the study was explained in detail in [10]. Twenty-five healthy volunteers participated in the experiment. ...
Conference Paper
Emotion recognition from EEG signals has gained great research interest in brain-computer interface (BCI) studies. As a result of the outstanding success of deep neural networks in image classification, deep learning methods have become popular for emotion classification from EEG signals. In this study, we used the AlexNet architecture to classify emotions in the Arousal and Valence domains separately. We generated time-frequency (TF) images of the 32-channel EEG data we collected using the Multivariate Synchrosqueezing Transform (MSST), and these TF images were then fed to the AlexNet model. A 3-fold cross-validation strategy was adopted to evaluate the robustness of the models. Training the AlexNet architecture yielded an average accuracy of 71.60% on Arousal and 67.93% on Valence. The results demonstrate that the proposed method achieves promising performance in classifying emotions.
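A hedged sketch of the classification stage described above, using torchvision's AlexNet adapted to two classes (high vs. low arousal). MSST is not part of torchvision, so the time-frequency images are assumed to be precomputed and resized to 224x224; the dummy batch stands in for them:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load AlexNet pretrained on ImageNet and swap the final layer for a
# two-class head (e.g., high vs. low arousal).
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of eight TF images.
images = torch.rand(8, 3, 224, 224)   # stand-ins for MSST TF images
labels = torch.randint(0, 2, (8,))
model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```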
Article
Full-text available
This paper presents a novel approach for emotion recognition (ER) based on electroencephalogram (EEG), electromyogram (EMG), electrocardiogram (ECG), and computer vision. The proposed system includes two different models, one for physiological signals and one for facial expressions, deployed in a real-time embedded system. A custom dataset of EEG, ECG, EMG, and facial expressions was collected from 10 participants using an Affective Video Response System. Time-, frequency-, and wavelet-domain features were extracted and optimized based on visualizations from exploratory data analysis (EDA) and principal component analysis (PCA). Local Binary Patterns (LBP), Local Ternary Patterns (LTP), Histogram of Oriented Gradients (HOG), and Gabor descriptors were used to differentiate facial emotions. Classification models, namely decision tree, random forest, and optimized variants thereof, were trained on these features. The optimized random forest achieved an accuracy of 84%, while the optimized decision tree achieved 76% for the physiological-signal model. The facial emotion recognition (FER) model attained accuracies of 84.6%, 74.3%, 67%, and 64.5% using k-nearest neighbors (KNN), random forest, decision tree, and XGBoost, respectively. Performance metrics, including area under the curve (AUC), F1 score, and the receiver operating characteristic (ROC) curve, were computed to evaluate the models. The outputs of the two models, i.e., the bio-signal and facial emotion analyses, are fed to a voting classifier to obtain the final emotion. A comprehensive report is then generated using a Generative Pretrained Transformer (GPT) language model based on the resulting emotion, achieving an accuracy of 87.5%. The model was implemented and deployed on a Jetson Nano, and the results show its relevance to ER. It has applications in enhancing prosthetic systems and other medical fields such as psychological therapy, rehabilitation, assistance for individuals with neurological disorders, mental health monitoring, and biometric security.
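As an illustration of one of the facial descriptors named above, a minimal sketch pairing uniform LBP histograms with a random forest; the image size, LBP parameters, and random stand-in data are assumptions, not values from the paper:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

P, R = 8, 1  # 8 neighbors at radius 1 (illustrative choice)

def lbp_histogram(gray_face: np.ndarray) -> np.ndarray:
    """Uniform LBP histogram of a grayscale face crop."""
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    # The 'uniform' mapping yields P + 2 distinct codes.
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Random stand-ins for aligned grayscale face crops and emotion labels.
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(100, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 6, size=100)  # six basic emotions

features = np.stack([lbp_histogram(face) for face in faces])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, labels)
```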
Chapter
Depression is one of the most common mental disorders, affecting 121 million people worldwide. Depression is more than a low mood: those who suffer from it can experience a lack of interest in daily activities, poor concentration, low energy, and feelings of worthlessness, and in the worst cases it can lead to suicide. For this reason, correct detection of the disorder is essential to reduce the number of misdiagnosed people. In addition to psychological analysis, EEG signals are one of the tools that help in detecting mental disorders such as depressive disorder. Therefore, the purpose of this study is to develop an algorithm for detecting depressive disorder based on the classification of EEG signals. For this purpose, machine learning was applied using the Welch method and four different classifiers: LDA, LR, KNN, and RFC. A neural network combining IC-RNN and C-DRNN architectures was also used. Despite working with few data, from only 26 depressed patients and 29 healthy participants, an accuracy of 57% was obtained.
Keywords: Depressive disorder, Medical diagnostic support, Welch method, EEG signal, Classifiers
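A hedged sketch of the Welch-based feature step with one of the listed classifiers (KNN); the sampling rate, band edges, and synthetic data are common defaults rather than values from the chapter:

```python
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier

FS = 256  # assumed sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg: np.ndarray) -> np.ndarray:
    """eeg: (channels, samples) -> flat vector of per-channel band powers."""
    freqs, psd = welch(eeg, fs=FS, nperseg=2 * FS, axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)

# Random stand-ins: 55 recordings (26 + 29 subjects), 19 channels, 10 s each.
rng = np.random.default_rng(0)
X = np.stack([band_powers(rng.standard_normal((19, 10 * FS))) for _ in range(55)])
y = rng.integers(0, 2, size=55)  # 1 = depressed, 0 = healthy (illustrative)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
```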
Article
Electroencephalography (EEG)-based emotion recognition is an important technology for human-computer interaction. In the field of neuromarketing, emotion recognition based on group EEG can be used to analyze the emotional states of multiple users. Previous emotion recognition experiments have been based on individual EEGs, which makes them difficult to use for estimating the emotional states of multiple users. The purpose of this study is to find a data processing method that can improve the efficiency of emotion recognition. This study used the DEAP dataset, which comprises EEG signals of 32 participants recorded as they watched 40 videos with different emotional themes. It compared emotion recognition accuracy based on individual and group EEGs using the proposed convolutional neural network model. The study shows that differences in phase locking value (PLV) exist across EEG frequency bands when subjects are in different emotional states. The results showed that an emotion recognition accuracy of up to 85% can be obtained for group EEG data using the proposed model, meaning that group EEG data can effectively improve the efficiency of emotion recognition. Moreover, the significant emotion recognition accuracy achieved for multiple users in this study can contribute to research on handling group human emotional states.
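The phase locking value mentioned above has a compact definition: the magnitude of the mean phase difference between two channels. A minimal sketch using the Hilbert transform (in practice, PLV is computed per frequency band on band-pass-filtered signals):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x: np.ndarray, y: np.ndarray) -> float:
    """PLV = |(1/N) * sum_n exp(i * (phi_x[n] - phi_y[n]))|."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

# Two noisy channels sharing a 10 Hz component give a PLV near 1.
t = np.linspace(0, 2, 512, endpoint=False)
rng = np.random.default_rng(0)
a = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * rng.standard_normal(t.size)
print(plv(a, b))
```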
Conference Paper
The performance of university students during their academic sessions is vital to their overall grade throughout their time at university. Multiple factors can degrade performance, but the foremost is their emotional state. Previous research has shown that the best way to determine student performance is to analyze attention levels. With the development of portable electroencephalogram (EEG) devices and machine learning algorithms, it is easy to obtain students' attention and emotion levels during academic sessions. This paper presents a method for obtaining EEG signals using a portable EEG device and classifying them into the types of emotions present in the human brain. The EEG device records the attention level and EEG signals during two scenarios: lectures/tutorials and exams/quizzes. The signals are then compiled and analyzed to determine emotion labels using a normalization process that categorizes the signals into positive or negative emotions. The dataset and labels are then used to train and evaluate multiple machine learning models and a deep learning model to determine which has the best accuracy and performance. The chosen model is then used to predict the emotions of several students during both scenarios, and the average emotions are compared with their average attention to determine the effect of emotions on the students' performance. Hence, this paper first provides a method for obtaining the emotion labels, followed by model development, and finally correlates the predicted emotions with the students' performance during their academic sessions.
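The abstract does not specify the normalization procedure, but a plausible minimal sketch of the positive/negative labeling step (z-scoring per-window scores against the student's own session and thresholding; entirely an assumption for illustration):

```python
import numpy as np

def label_emotions(scores: np.ndarray) -> np.ndarray:
    """Z-score a per-window emotion score against the student's own session
    and threshold at zero into positive/negative labels."""
    z = (scores - scores.mean()) / scores.std()
    return np.where(z >= 0, "positive", "negative")

# Illustrative session: one score per analysis window.
rng = np.random.default_rng(0)
session_scores = rng.normal(loc=0.2, scale=1.0, size=120)
labels = label_emotions(session_scores)
```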