Positive and negative emotions.

Source publication
Article
Full-text available
The purpose of this research is to emphasize the importance of mental health and contribute to the overall well-being of humankind by detecting stress. Stress is a state of mental or physical strain. It can result from any event or thought that frustrates, angers, or unnerves you. Your body’s response to a demand or cha...

Context in source publication

Context 1
... emotions such as happiness, joy, love, pride, and pleasure can have a positive effect, such as improving daily work performance, while negative emotions such as anger, fear, sadness, and disgust can have a negative impact on a person's health. Positive and negative emotions are represented in Figure 2. Emotional signs such as depression, fear, unhappiness, anxiety, agitation, and anger are responsible for stress. ...

Similar publications

Chapter
Full-text available
This study focuses on the relationship between fandom and well-being. Fandom refers to the act of endorsing something, whether a person, such as an artist, or a non-person entity, such as a game character or a specific brand. In other words, fandom can be seen as the ultimate customer engagement. Previous studies have suggested that customer-brand engagement contrib...

Citations

... Similarly, Tariq et al. (2019) developed a 2D-CNN to detect seven basic emotions (calm, happy, sad, angry, fearful, disgust, and surprise) in the speech of elderly patients, with an overall accuracy of up to 95%. In related research, multimodal approaches using cascaded LSTM recurrent neural networks (RNNs) were used by Gupta et al. (2022) to detect stressed and unstressed states of test subjects with an accuracy of 91%. Schuller et al. (2020) demonstrated the effectiveness of employing acoustic and prosodic features in a CNN and LSTM RNN approach to detect arousal and valence from older adults' speech samples with 72% accuracy. ...
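As a rough illustration of the CNN-plus-LSTM speech pipelines cited above, the sketch below stacks 2D convolutions over MFCC features and runs an LSTM over the resulting frame sequence. Shapes, layer sizes, and the seven-class label set are illustrative assumptions, not any of the cited authors' exact architectures.

```python
# Hypothetical sketch of a small CNN + LSTM classifier over MFCC features,
# in the spirit of the CNN / LSTM-RNN speech-emotion pipelines cited above.
import torch
import torch.nn as nn

class SpeechEmotionNet(nn.Module):
    def __init__(self, n_mfcc: int = 40, n_classes: int = 7):
        super().__init__()
        # 2D convolutions learn local time-frequency patterns from the MFCC "image".
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # The LSTM models the temporal evolution of the pooled feature maps.
        self.lstm = nn.LSTM(input_size=32 * (n_mfcc // 4), hidden_size=64,
                            batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, mfcc: torch.Tensor) -> torch.Tensor:
        # mfcc: (batch, 1, n_mfcc, time_frames)
        x = self.cnn(mfcc)                     # (batch, 32, n_mfcc/4, time/4)
        x = x.permute(0, 3, 1, 2).flatten(2)   # (batch, time/4, 32 * n_mfcc/4)
        _, (h, _) = self.lstm(x)               # final hidden state summarizes the clip
        return self.head(h[-1])                # logits over emotion classes

# Example: a batch of 8 clips, 40 MFCC coefficients x 200 frames each.
logits = SpeechEmotionNet()(torch.randn(8, 1, 40, 200))
print(logits.shape)  # torch.Size([8, 7])
```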
Article
Full-text available
The research presents the development and test of a machine learning (ML) model to assess the subjective well-being of older adults based solely on natural speech. The use of such technologies can have a positive impact on healthcare delivery: the proposed ML model is patient-centric and securely uses user-generated data to provide sustainable value not only in the healthcare context but also to address the global challenge of demographic change, especially with respect to healthy aging. The developed model unobtrusively analyzes the vocal characteristics of older adults by utilizing natural language processing but without using speech recognition capabilities and adhering to the highest privacy standards. It is based on theories of subjective well-being, acoustic phonetics, and prosodic theories. The ML models were trained with voice data from volunteer participants and calibrated through the World Health Organization Quality of Life Questionnaire (WHOQOL), a widely accepted tool for assessing the subjective well-being of human beings. Using WHOQOL scores as a proxy, the developed model provides accurate numerical estimates of individuals’ subjective well-being. Different models were tested and compared. The regression model proves beneficial for detecting unexpected shifts in subjective well-being, whereas the support vector regression model performed best and achieved a mean absolute error of 10.90 with a standard deviation of 2.17. The results enhance the understanding of the subconscious information conveyed through natural speech. This offers multiple applications in healthcare and aging, as well as new ways to collect, analyze, and interpret self-reported user data. Practitioners can use these insights to develop a wealth of innovative products and services to help seniors maintain their independence longer, and physicians can gain much greater insight into changes in their patients’ subjective well-being.
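A minimal, self-contained sketch of the best-performing idea in this abstract, support vector regression evaluated by mean absolute error, is shown below. The random features and scores are placeholders, since the actual prosodic descriptors and WHOQOL-derived labels are not available here.

```python
# Illustrative only: regress a well-being score from acoustic feature vectors
# with support vector regression and report mean absolute error.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 88))       # stand-in for per-speaker acoustic features
y = rng.uniform(0, 100, size=300)    # stand-in for WHOQOL-style scores

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X_tr, y_tr)

print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```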
... With the advent of deep learning methods, significant advancements have been made in audio-visual action recognition. Researchers have employed convolutional neural networks (CNNs) to learn spatial features from video frames, and recurrent neural networks (RNNs) and long short-term memory (LSTM) networks to capture temporal patterns in audio and visual data [20,21]. Various strategies have also been proposed to fuse the features, including concatenation, element-wise summation, and attention mechanisms [22,23]. ...
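The three fusion strategies mentioned in this excerpt (concatenation, element-wise summation, and attention) can be sketched in a few lines. The gating network below is a generic illustration of attention-style weighting, not any specific published fusion module.

```python
# Minimal sketch of three feature-fusion strategies for per-clip audio and
# visual embeddings of equal size.
import torch
import torch.nn as nn

audio = torch.randn(8, 256)   # (batch, feature_dim) audio embedding
visual = torch.randn(8, 256)  # (batch, feature_dim) visual embedding

# 1) Concatenation: keep both feature sets side by side.
fused_concat = torch.cat([audio, visual], dim=-1)              # (8, 512)

# 2) Element-wise summation: cheap, assumes aligned feature spaces.
fused_sum = audio + visual                                     # (8, 256)

# 3) Attention-style fusion: learn per-modality weights and mix.
gate = nn.Sequential(nn.Linear(512, 2), nn.Softmax(dim=-1))
weights = gate(fused_concat)                                   # (8, 2)
fused_attn = weights[:, :1] * audio + weights[:, 1:] * visual  # (8, 256)

print(fused_concat.shape, fused_sum.shape, fused_attn.shape)
```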
Article
Full-text available
Our approach to action recognition is grounded in the intrinsic coexistence of and complementary relationship between audio and visual information in videos. Going beyond the traditional emphasis on visual features, we propose a transformer-based network that integrates both audio and visual data as inputs. This network is designed to accept and process spatial, temporal, and audio modalities. Features from each modality are extracted using a single Swin Transformer, originally devised for still images. Subsequently, these extracted features from spatial, temporal, and audio data are adeptly combined using a novel modal fusion module (MFM). Our transformer-based network effectively fuses these three modalities, resulting in a robust solution for action recognition.
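As a loose illustration only, and not the paper's modal fusion module (MFM), the sketch below treats each modality embedding (spatial, temporal, audio) as a token and lets a small transformer encoder mix them before classification; dimensions and the class count are assumptions.

```python
# Generic transformer-style fusion over three modality embeddings.
import torch
import torch.nn as nn

class SimpleModalFusion(nn.Module):
    def __init__(self, dim: int = 256, n_classes: int = 400):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, spatial, temporal, audio):
        # Stack modality embeddings as a 3-token sequence: (batch, 3, dim).
        tokens = torch.stack([spatial, temporal, audio], dim=1)
        fused = self.encoder(tokens).mean(dim=1)   # average over modality tokens
        return self.head(fused)                    # action-class logits

model = SimpleModalFusion()
logits = model(torch.randn(4, 256), torch.randn(4, 256), torch.randn(4, 256))
print(logits.shape)  # torch.Size([4, 400])
```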
... In addition, deep-learning architectures have emerged as powerful tools for time-based prediction problems. Recurrent Neural Networks (RNNs) and convolutional neural networks (CNNs) have gained traction in well-being research [9-14]. RNNs, particularly LSTM variants, have demonstrated their ability to capture long-term dependencies and temporal dynamics in well-being-related data, while CNNs extract hierarchical features from wearable sensor data, enabling more accurate prediction. ...
Article
Full-text available
Wearable devices have become ubiquitous, collecting rich temporal data that offers valuable insights into human activities, health monitoring, and behavior analysis. Leveraging these data, researchers have developed innovative approaches to classify and predict time-based patterns and events in human life. Time-based techniques allow the capture of intricate temporal dependencies, which is the nature of the data coming from wearable devices. This paper focuses on predicting well-being factors, such as stress, anxiety, and positive and negative affect, on the Tesserae dataset collected from office workers. We examine the performance of different methodologies, including deep-learning architectures, LSTM, ensemble techniques, Random Forest (RF), and XGBoost, and compare their performances for time-based and non-time-based versions. In time-based versions, we investigate the effect of previous records of well-being factors on the upcoming ones. The overall results show that time-based LSTM performs the best among conventional (non-time-based) RF, XGBoost, and LSTM. The performance even increases when we consider a more extended previous period, in this case 3 past days rather than 1 past day, to predict the next day. Furthermore, we explore the corresponding biomarkers for each well-being factor using feature ranking. The obtained rankings are compatible with the psychological literature. In this work, we validated them based on device measurements rather than subjective survey responses.
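A minimal sketch of the "time-based" setup described in this abstract appears below: it builds 3-day windows and trains a small LSTM to predict the next day's value. The synthetic daily features and well-being scores are stand-ins for the Tesserae data, which is not available here.

```python
# Sketch: use the previous 3 days of wearable-derived features to predict the
# next day's well-being score with an LSTM (synthetic data, assumed shapes).
import torch
import torch.nn as nn

n_days, n_features, window = 60, 16, 3
daily = torch.randn(n_days, n_features)   # one feature vector per day
scores = torch.randn(n_days)              # daily well-being score (target)

# Build (3-day window -> next-day score) training pairs.
X = torch.stack([daily[i:i + window] for i in range(n_days - window)])  # (57, 3, 16)
y = scores[window:]                                                     # (57,)

class NextDayLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 32, batch_first=True)
        self.out = nn.Linear(32, 1)

    def forward(self, x):
        _, (h, _) = self.lstm(x)           # summary of the 3-day window
        return self.out(h[-1]).squeeze(-1)

model, loss_fn = NextDayLSTM(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                         # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(float(loss))
```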
... Government organizations could analyze public opinion on various topics [1,2]. There is much interest in academic and business fields [3,4] in techniques for harvesting the sentiment or opinion information contained in bulk social media feeds, whether to improve products based on fast feedback from customers, monitor stress issues [5], support security and forensics [6], or aid disaster management and mitigation [7,8]. The internet generates approximately 328.77 million terabytes of data each day, which motivates its use for the betterment of intelligent and healthy cities [3,7]. ...
... Furthermore, the embedded input fed to the Hybrid-GFX-Attention-BiGRU-CNN language model is based on Eqs. (4)–(13). The model's performance is evaluated with evaluation metrics and through interpretation of the class predictions. ...
Article
Full-text available
Policies, legislation, surveillance, monitoring, direction, and enforcement are heavily influenced by public opinion or emotion. The increase in electronic data generation has made automatic analysis of these opinions or feelings, termed opinion analysis, a necessity. To process massive volumes of data, deep learning is now trending. Word embeddings serve an essential role as feature representations in deep learning. The present paper offers a novel deep learning architecture built on a hybrid embedding that deals with polysemy, semantic, and syntactic problems in language representation. The effectiveness of a deep learning model is extremely sensitive to its hyperparameters. Here, we propose a novel Hybrid-GFX–Attention–BiGRU–CNN language model tuned with Hyperband. Hyperband search is used to find optimal values for the model's hyperparameters. To justify the classification results, statistical and graphical approaches have been used. We analyzed the model's efficacy using the MR and Hate Speech datasets. The model's performance is quite promising compared with existing state-of-the-art architectures.
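To illustrate the Hyperband search step described in this abstract, the sketch below uses the KerasTuner library on a toy dense classifier. The tiny model and the search space (units, learning rate) are assumptions standing in for, and not reproducing, the paper's Hybrid-GFX–Attention–BiGRU–CNN model.

```python
# Hedged illustration of Hyperband hyperparameter search with KerasTuner.
import numpy as np
import keras_tuner
from tensorflow import keras

def build_model(hp):
    model = keras.Sequential([
        keras.layers.Input(shape=(300,)),             # e.g. averaged word embeddings
        keras.layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # binary sentiment / hate-speech label
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Toy data standing in for embedded MR / Hate Speech samples.
X = np.random.rand(500, 300).astype("float32")
y = np.random.randint(0, 2, size=(500,))

tuner = keras_tuner.Hyperband(build_model, objective="val_accuracy",
                              max_epochs=6, factor=3, overwrite=True,
                              directory="kt_demo", project_name="hyperband_sketch")
tuner.search(X, y, validation_split=0.2, verbose=0)
print(tuner.get_best_hyperparameters(1)[0].values)
```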
... Results indicated a negative correlation (p ≤ 0.009) between SPPB and POMA-G scores and LyE in specific PMs, suggesting that increased walking instability is associated with higher fall risk. • The study by Gupta et al. [18] aimed to detect and address stress, which is a significant factor affecting mental health and overall well-being. In this study, a novel approach utilizing audio-visual data processing is proposed to detect human mental stress. ...
Article
Full-text available
In recent years, the integration of Machine Learning (ML) techniques in the field of healthcare and public health has emerged as a powerful tool for improving decision-making processes [...]