Figure 1 - uploaded by Danny Plass-Oude Bos
Brain-computer interface cycle.

Source publication
Article
Full-text available
Making the computer more empathic to the user is one of the aspects of affective computing. With EEG-based emotion recognition, the computer can actually take a look inside the user's head to observe their mental state. This paper describes a research project conducted to recognize emotion from brain signals measured with the BraInquiry EEG PET d...

Contexts in source publication

Context 1
... brain-computer interfaces consist of very typical components, each of which performs its own critical function. Figure 1 shows the process cycle. First of all (1) a stimulus set and test protocol is needed. ...
Context 2
... go from EEG recording to actual classification results, many intermediate steps have to be performed, as shown in Figure 1: artifact filtering, feature extraction, classifier training, and of course the actual classification itself. ...
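The stages listed in this context (feature extraction, classifier training, and classification) can be sketched end to end. The power features and the nearest-mean classifier below are illustrative assumptions for the sketch, not the method used in the source paper:

```python
import numpy as np

def extract_features(epoch):
    """Toy feature extraction: per-channel mean signal power."""
    return np.mean(epoch ** 2, axis=-1)

class NearestMeanClassifier:
    """Minimal stand-in for the classifier-training and classification stages."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Distance from each sample to each class mean; pick the closest class.
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=-1)
        return self.classes_[np.argmin(d, axis=1)]

rng = np.random.default_rng(0)
# Two toy "emotion" classes of EEG epochs (channels x samples), differing in power.
epochs = np.concatenate([rng.normal(0, 1, (20, 2, 64)),
                         rng.normal(0, 3, (20, 2, 64))])
labels = np.array([0] * 20 + [1] * 20)

X = np.array([extract_features(e) for e in epochs])
clf = NearestMeanClassifier().fit(X, labels)
accuracy = np.mean(clf.predict(X) == labels)  # training accuracy on toy data
```

In a real pipeline the toy power feature would be replaced by band-power or entropy features, and the classifier by whatever model the study actually trains.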
Context 3
... plots in Figures 9, 10, and 11 show the performance rates (correctly classified / total classified) for each of the classification categories, averaged over the binary classification for each class, and averaged, minimized, and maximized respectively over the results for each number of PCA components. ...
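The averaging scheme described in this context can be illustrated with a small hypothetical results array (the values below are made up purely for illustration): one row per PCA setting, one column per class.

```python
import numpy as np

# Hypothetical performance rates (correctly classified / total classified):
# rows = number of PCA components tried, columns = per-class binary rates.
rates = np.array([
    [0.60, 0.55, 0.70],
    [0.65, 0.58, 0.72],
    [0.62, 0.50, 0.68],
])

avg_per_setting = rates.mean(axis=1)  # averaged over classes, per PCA setting
min_per_setting = rates.min(axis=1)   # minimized over classes, per PCA setting
max_per_setting = rates.max(axis=1)   # maximized over classes, per PCA setting
```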
Context 4
... bandpass filtering the EEG data to limit it to alpha and beta frequencies, the differences are less apparent, but still there. The plots in Figures 13, 14, 15, 16 and 17 show the full-length recordings of each of the test subjects, to illustrate this (red is Fpz, blue is F3/F4). These differences may have a lot of impact on the ease with which they are analyzed. ...
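Limiting EEG to the alpha and beta range, as this context describes, is commonly done with a zero-phase Butterworth bandpass. The 8-30 Hz edges and the filter order below are conventional assumptions, not values taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def limit_to_alpha_beta(eeg, fs, lo=8.0, hi=30.0, order=4):
    """Zero-phase Butterworth bandpass keeping roughly alpha (8-12 Hz)
    plus beta (12-30 Hz); band edges are conventional, not from the paper."""
    nyq = fs / 2.0
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

fs = 256
t = np.arange(2 * fs) / fs
# 10 Hz alpha component (kept) plus a strong 2 Hz drift (removed).
raw = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 2 * t)
filtered = limit_to_alpha_beta(raw, fs)
```

`filtfilt` runs the filter forward and backward so the filtered trace stays time-aligned with the raw one, which matters when recordings are inspected visually as in the figures above.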
Context 5
... based on visual inspection, no conclusive interpretations could be made. However, for Modality: Figures 18, 19, 20, 21. ...

Citations

... The Fourier frequency analysis splits the raw EEG signals and removes the detected noises using the bandpass filter [97]. Bos et al. [98] used a bandpass filter made available from EEGLab for Matlab to eliminate noise and artefacts from EEG recordings. The moving average filter pre-processes RESP and EDA signals [99]. ...
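The moving-average pre-processing mentioned here for RESP and EDA signals can be sketched as a simple convolution; the window length below is an illustrative choice, not a value from the cited work:

```python
import numpy as np

def moving_average(signal, window=5):
    """Centered moving-average smoothing for a 1-D physiological signal
    (e.g. RESP or EDA); the window length is an illustrative choice."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

rng = np.random.default_rng(0)
# Slow respiration-like oscillation with additive measurement noise.
noisy = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * rng.standard_normal(200)
smoothed = moving_average(noisy, window=7)
```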
Article
Full-text available
Automated Emotion Recognition Systems (ERS) with physiological signals help improve health and decision-making in everyday life. They use traditional Machine Learning (ML) methods, which require high-quality learning models for physiological data (sensitive information). However, automated ERS are exposed to data attacks and leaks, significantly compromising user privacy and integrity. This privacy problem can be addressed with a novel Federated Learning (FL) approach, which enables distributed machine-learning model training. This review examines 192 papers focusing on emotion recognition via physiological signals and FL. It is the first review article concerning the privacy of sensitive physiological data for an ERS. The paper reviews the different emotions, benchmark datasets, machine learning, and federated learning approaches for classifying emotions. It proposes a novel multi-modal Federated Learning for Physiological signals based on Emotion Recognition Systems (Fed-PhyERS) architecture, experimenting with the AMIGOS dataset and its applications for a next-generation automated ERS. Based on critical analysis, this paper provides the key takeaways, identifies the limitations, and proposes future research directions to address gaps in previous studies. Moreover, it reviews ethical considerations related to implementing the proposed architecture. This review paper aims to provide readers with a comprehensive insight into the current trends, architectures, and techniques utilized within the field.
... Our results also showed frontal-temporal relevance at 20-24, 24-28, and 28-32 Hz, which is supported by [36], where the high beta band (21-29 Hz) shows the best odor-induced emotion classification performance. Furthermore, the frontal-temporal lobe is found to be related to emotional valence processing [57]-[59]. Our results indicate relevance at the frontal-temporal region across multiple frequency bands. ...
... The plot shows relevance in frontal-temporal channels across a wide range of frequencies, mainly in the beta and gamma bands. The frontal-temporal lobe is reported to be related to emotional valence processing [57]-[59], and sub-bands of the beta and gamma bands show high relevance in the left frontal-temporal region. Additionally, the frontal gamma band is found to be related to emotion processing [54], [55], while the beta band was found to contribute to odor-induced emotion classification [36]. ...
Article
Full-text available
The olfactory system enables humans to smell different odors, which are closely related to emotions. The high temporal resolution and non-invasiveness of Electroencephalogram (EEG) make it suitable to objectively study human preferences for odors. Effectively learning the temporal dynamics and spatial information from EEG is crucial for detecting odor-induced emotional valence. In this paper, we propose a deep learning architecture called Temporal Attention with Spatial Autoencoder Network (TASA) for predicting odor-induced emotions using EEG. TASA consists of a filter-bank layer, a spatial encoder, a time segmentation layer, a Long Short-Term Memory (LSTM) module, a multi-head self-attention (MSA) layer, and a fully connected layer. We improved upon the previous work by utilizing a two-phase learning framework, using the autoencoder module to learn the spatial information among electrodes by reconstructing the given input with a latent representation in the spatial dimension, which aims to minimize information loss compared to spatial filtering with CNN. The second improvement is inspired by the continuous nature of the olfactory process; we propose to use LSTM-MSA in TASA to capture its temporal dynamics by learning the intercorrelation among the time segments of the EEG. TASA is evaluated on an existing olfactory EEG dataset and compared with several existing deep learning architectures to demonstrate its effectiveness in predicting olfactory-triggered emotional responses. Interpretability analyses with DeepLIFT also suggest that TASA learns spatial-spectral features that are relevant to olfactory-induced emotion recognition.
... The recognizability of various emotions depends on how effectively it is possible to map the EEG features to the emotion representation selected. The current auditory brain-computer interface study concludes that it is better to train with visual input to increase or decrease the sensorimotor rhythm amplitude than with auditory feedback [35]. This is not linked to the recognition of feelings, although it is noted in the discussion that a less developed sense of hearing can be present in healthy individuals without eye problems [36]. ...
... Visual stimuli may be easier to recognize from brain signals than audio stimuli, since the visual sense is more developed. Following this logic, a combined attempt to evoke an emotion from both visual and auditory input can offer the optimal atmosphere for recognition of emotions [35]. In this paper, the DEAP dataset [37], which is available online, has been used. ...
... Later many entropy generalizations were formulated and effectively employed for various EEG-based medical research, including mental illnesses, epilepsy [16][17][18], Alzheimer's [19][20][21], autism, and depression [22,23], among others. Considering these outcomes, entropy measures were employed in studying emotion recognition from EEG signals [24][25][26][27]. ...
Article
Full-text available
Human emotion recognition remains a challenging and prominent issue, situated at the convergence of diverse fields, such as brain–computer interfaces, neuroscience, and psychology. This study utilizes an EEG data set for investigating human emotion, presenting novel findings and a refined approach for EEG-based emotion detection. Tsallis entropy features, computed for q values of 2, 3, and 4, are extracted from signal bands, including theta-θ (4–7 Hz), alpha-α (8–15 Hz), beta-β (16–31 Hz), gamma-γ (32–55 Hz), and the overall frequency range (0–75 Hz). These Tsallis entropy features are employed to train and test a KNN classifier, aiming for accurate identification of two emotional states: positive and negative. In this study, the best average accuracy of 79% and an F-score of 0.81 were achieved in the gamma frequency range for the Tsallis parameter q = 3. In addition, the highest accuracy and F-score of 84% and 0.87 were observed. Notably, superior performance was noted in the anterior and left hemispheres compared to the posterior and right hemispheres in the context of emotion studies. The findings show that the proposed method exhibits enhanced performance, making it a highly competitive alternative to existing techniques. Furthermore, we identify and discuss the shortcomings of the proposed approach, offering valuable insights into potential avenues for improvements.
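The Tsallis entropy features used in the study above follow S_q = (1 - sum(p_i^q)) / (q - 1). A minimal sketch is given below; deriving the probabilities from normalized signal energy is an assumption of this example, not a detail stated in the abstract:

```python
import numpy as np

def tsallis_entropy(signal, q):
    """Tsallis entropy S_q = (1 - sum(p_i**q)) / (q - 1). Taking p from
    normalized signal energy is one common convention, assumed here."""
    p = np.asarray(signal, dtype=float) ** 2
    p = p / p.sum()
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Features for q = 2, 3, 4, as in the study; x stands in for one band-filtered signal.
x = np.random.default_rng(1).standard_normal(256)
features = [tsallis_entropy(x, q) for q in (2, 3, 4)]
```

In the study, such features would be computed separately per frequency band (theta, alpha, beta, gamma, and full range) before feeding the KNN classifier.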
... Analysis Methodology Limitations: Current analytical approaches suffer from two key limitations that impede their applicability in real-world mental health monitoring [8]. Firstly, these models often lack personalisation [9], opting instead for a one-size-fits-all algorithm. ...
Conference Paper
The rapid advancement of wearable sensors and machine learning technologies has opened new avenues for mental health monitoring. Despite these advancements, conventional approaches often fail to provide an accurate and personalised understanding of an individual's multi-dimensional emotional state. This paper introduces a novel approach for enhanced daily mental health prediction, focusing on nine distinct emotional states. Our method employs a personalised crossmodal transformer architecture that effectively integrates ZCM (Zero Crossing Mode) and PIM (Proportional Integration Mode) physical signals obtained from piezoelectric accelerometers worn on the non-dominant wrist. Utilising this personalised crossmodal transformer model, our approach adaptively focuses on the most pertinent features across these diverse physical signals, thereby offering a more nuanced and individualised assessment of an individual's emotional state. Our experiments show a considerable improvement in performance, achieving a Concordance Correlation Coefficient (CCC) of 0.475 over a baseline of 0.281.
... Emotion recognition plays a vital role in various fields, including healthcare, affective computing, and human-computer interaction [3], [4], [5]. One of the most relevant tasks in the field of emotion recognition is the measurement of valence and arousal dimensions, which represent the positive/negative and low/high activation levels of emotions, respectively. ...
Conference Paper
Personalization is essential in enhancing the performance of machine learning models in brain-computer interfaces (BCIs) for emotion recognition, specifically in valence and arousal classification. In this work, we address the challenge of personalizing BCI models utilizing a wireless consumer non-invasive electroencephalogram (EEG) device with dry electrodes. Our research investigates the effectiveness of three machine learning algorithms in classifying valence and arousal: k-Nearest Neighbors (k-NNs), Support Vector Machines (SVMs), and Artificial Neural Networks (ANNs). To achieve personalization, we adopt an incremental learning approach by progressively incorporating high-quality subject data during model training that are taken from the ground-truth. We compare the performance of the models before and after personalization. The results demonstrate significant improvements in valence and arousal classification accuracy through personalization, with the personalized models outperforming the non-personalized models by up to 27.82% and 28.80%, respectively.
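The incremental personalization idea described above (progressively folding subject-specific samples into the training set) can be sketched with a minimal k-NN. The synthetic data, in which the subject's class distributions differ sharply from the generic pool, is an assumption made purely to show the effect:

```python
import numpy as np

def knn_predict(X_train, y_train, X, k=3):
    """Minimal k-nearest-neighbors classifier (Euclidean distance, majority vote)."""
    preds = []
    for x in X:
        nearest = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
        preds.append(np.bincount(y_train[nearest]).argmax())
    return np.array(preds)

rng = np.random.default_rng(0)
# Generic pool: class 0 near the origin, class 1 near (3, 3).
X_pool = np.concatenate([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y_pool = np.array([0] * 30 + [1] * 30)
# A subject whose classes sit in the opposite regions (a deliberately hard shift).
X_subj = np.concatenate([rng.normal(3, 0.2, (20, 2)), rng.normal(0, 0.2, (20, 2))])
y_subj = np.array([0] * 20 + [1] * 20)

# Hold out half the subject data for testing; personalize with the other half.
X_test = np.concatenate([X_subj[10:20], X_subj[30:40]])
y_test = np.concatenate([y_subj[10:20], y_subj[30:40]])
X_pers = np.concatenate([X_pool, X_subj[:10], X_subj[20:30]])
y_pers = np.concatenate([y_pool, y_subj[:10], y_subj[20:30]])

baseline_acc = np.mean(knn_predict(X_pool, y_pool, X_test) == y_test)
personal_acc = np.mean(knn_predict(X_pers, y_pers, X_test) == y_test)
```

On this toy data the pool-only model fails for the shifted subject, while adding even a handful of subject samples recovers accuracy, which is the mechanism the paper exploits.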
... EEG Signal Processing & Analysis: Electroencephalography (EEG) involves non-invasively sensing electrical signals on the scalp. It has been used for tasks as diverse as motor-imagery inference [2], emotion recognition [3], and speech comprehension [4], as well as preliminary diagnosis for neurobiological conditions such as cognitive impairment, Parkinson's, schizophrenia, and dementia [5]. Furthermore, several types of neural networks have been designed to help with such diagnosis, including but not limited to Convolutional Neural Networks (CNNs) (e.g., EEGNet [6]), Long Short-Term Memories (LSTMs), as well as Recurrent Neural Networks (RNNs) [7]. ...
Conference Paper
Full-text available
This study proposes an ultra-low-power processing approach for early seizure detection using electroencephalogram (EEG) signal processing. Traditional methods of EEG processing are power-hungry and resource-intensive, making them impractical for long-term, real-time monitoring. We propose a two-tier wireless sensor network architecture with all-analog in-situ processing at the sensor level and Cluster-Heads (CHs) to compile the data into meaningful information. The paper introduces a novel all-analog Convolutional Processing Unit (CvPU) that utilizes anisotropic diffusion properties in electrical circuits and a neural-network architecture for EEG-based seizure detection. The power consumption of the proposed architecture is estimated to be 1 to 3 orders of magnitude lower than contemporary digital and hybrid analog-digital approaches. The proposed approach is cost-effective, generates a comprehensive picture of an individual's brain health, and can be applied in other physiological monitoring areas, including athletic fitness assessments and personal-health monitoring.
... Which electrode placements would work well for emotion recognition while utilizing only a few electrodes is also a fascinating question. Bos et al. [48] selected the left mastoid for ground, F3/F4 for valence recognition, and Fpz/right mastoid for arousal recognition. Her findings suggested that F3 and F4 are the best electrode placements for identifying emotional valence. ...
Article
Affective computing, which focuses on identifying emotions from physiological data, namely electroencephalography (EEG), is becoming increasingly significant. However, direct analysis of EEG is highly challenging due to its nonlinear and nonstationary character. Various EEG rhythms provide a reliable method for the automatic recognition of emotions. Therefore, an integrated eigenvector centrality-variational nonlinear chirp mode decomposition-based EEG rhythm separation (EVNCERS) is developed. For selecting the dominant channels, the Eigenvector Centrality Method is used, followed by variational nonlinear chirp mode decomposition to retrieve the instantaneous frequency (IF) and instantaneous amplitude (IA) from EEG signals. The equivalent IA and IF are used to create the delta, theta, alpha, beta, and gamma rhythms. The rhythms are analyzed over several entropy-based features, chosen using statistical analysis [mean and standard deviation (STD)], then categorized using various machine-learning methods. The proposed EVNCERS has achieved the highest performance of accuracy and F1-score for arousal (96.86%, 97.98%), valence (96.87%, 97.82%), and dominance (96.81%, 97.71%) using a random rotation forest classifier. The performance revealed that the delta rhythm offered more insight into automatic emotion recognition. The DREAMER dataset results demonstrate that our model has the highest predictive ability, with AUC values of 0.98 for arousal and dominance and 0.99 for the valence category. The SEED dataset also shows a similar trend, with the delta rhythm achieving the highest accuracy of 87.25% and F1-score of 74.54%. The proposed EVNCERS model can help in real-time situations to automatically recognize affective emotions, which would give us a greater range of emotional states.
... On the other hand, arousal represents the intensity of emotion and how reactive or proactive someone is to a stimulus. Since relaxation is associated with alpha waves, while alertness is linked to beta waves, researchers have found that the ratio of beta and alpha measured in the frontal lobe (F3 and F4) can be an expression of a person's arousal [18,72,73]: ...
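The beta/alpha ratio over F3 and F4 described here (the formula itself is truncated in the snippet) can be sketched as a band-power ratio. The Welch PSD estimate and the 8-12 Hz / 12-30 Hz band edges are conventional assumptions, not values from the cited text:

```python
import numpy as np
from scipy.signal import welch

def arousal_index(f3, f4, fs, alpha=(8.0, 12.0), beta=(12.0, 30.0)):
    """Arousal as the ratio of beta to alpha band power summed over the
    F3 and F4 channels; band edges and Welch PSD are conventional choices."""
    def band_power(x, band):
        freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
        mask = (freqs >= band[0]) & (freqs < band[1])
        return float(np.sum(psd[mask]))
    return ((band_power(f3, beta) + band_power(f4, beta)) /
            (band_power(f3, alpha) + band_power(f4, alpha)))

fs = 128
t = np.arange(4 * fs) / fs
relaxed = np.sin(2 * np.pi * 10 * t)  # alpha-dominated signal -> low ratio
alert = np.sin(2 * np.pi * 20 * t)    # beta-dominated signal -> high ratio
```

With an alpha-dominated input the index stays well below 1, and with a beta-dominated input well above 1, matching the alpha-relaxation / beta-alertness interpretation in the text.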
Article
Full-text available
In the last decade, museums and exhibitions have benefited from the advances in Virtual Reality technologies to create complementary virtual elements to the traditional visit. The aim is to make the collections more engaging, interactive, comprehensible and accessible. Also, the studies regarding users’ and visitors’ engagement suggest that the real affective state cannot be fully assessed with self-assessment techniques and that other physiological techniques, such as EEG, should be adopted to gain a more unbiased and mature understanding of their feelings. With the aim of contributing to bridging this knowledge gap, this work proposes to adopt literature EEG-based indicators (valence, arousal, engagement) to analyze the affective state of 95 visitors interacting physically or virtually (in a VR environment) with five handicraft objects belonging to the permanent collection of the Museo dell’Artigianato Valdostano di Tradizione, which is a traditional craftsmanship museum in the Valle d’Aosta region. Extreme Gradient Boosting (XGBoost) was adopted to classify the obtained engagement measures, which were labeled according to questionnaire replies. EEG analysis played a fundamental role in understanding the cognitive and emotional processes underlying immersive experiences, highlighting the potential of VR technologies in enhancing participants’ cognitive engagement. The results indicate that EEG-based indicators have common trends with self-assessment, suggesting that their use as ‘the ground truth of emotion’ is a viable option.
... The accelerated learning method was developed in the 1960s by Lozanov. It included the use of both conscious and subconscious auditory and visual variables (such as baroque music, subliminal messages, breathing exercises, and a calm environment), which evoke, in turn, a physical response that strives for harmony between the various body systems: heart rate, breathing, and brain activity in the alpha wave frequency (Bos, 2006; Lozanov, 2005). ...