Fig. 10: Classification of locked-in syndrome (LIS)

Source publication
Article
Full-text available
The EEG-based brain-computer interface has emerged as a hot spot in the study of neuroscience, machine learning, and rehabilitation in recent years. A BCI provides a platform for direct communication between a human brain and a computer without the normal neurophysiological pathways. The electrical signals in the brain, because of their fast response t...

Similar publications

Article
Full-text available
Brain–computer interface (BCI) researchers have shown increasing interest in soliciting user experience (UX) feedback, but the severe speech and physical impairments (SSPI) of potential users create barriers to effective implementation with existing feedback instruments. This article describes augmentative and alternative communication (AAC)-based...

Citations

... Research on practicality has studied imagined speech in EEG-based BCI systems and shown that imagined speech can be decoded using texts with highly discriminative pronunciation [33]. Hence, BCI-based devices can be controlled by processing brain signals and decoding inner speech [34]. Extensive research has been conducted to develop BCI systems using inner speech and motor imagery [35]. ...
... While much previous BCI work has focused on movement recovery, another critical area for BCI development is speech restoration for the treatment of aphasia, which is commonly associated with stroke. As with motor decoding, the early literature on speech decoding originated with non-invasive approaches, primarily the P300 event-related potential measured with EEG while the participant focused on a specific letter within a grid of rows and columns [67,68]. In recent years, many groups have also trialed invasive BCIs for speech decoding. ...
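As an aside, the row/column P300 speller logic described in this excerpt reduces to averaging the EEG epochs time-locked to each row and column flash and selecting the pair with the strongest response in the P300 window. A minimal illustrative sketch in Python; the sampling rate, window, array shapes, and data are all assumptions, not values from the cited studies:

```python
import numpy as np

fs = 250                                            # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
# 12 stimuli (6 rows + 6 columns), 30 flashes each, 1 s epochs (dummy data)
epochs = rng.standard_normal((12, 30, fs))
p300_win = slice(int(0.25 * fs), int(0.45 * fs))    # ~250-450 ms post-flash

# Average the epochs per stimulus, then take the mean amplitude in the P300 window
scores = epochs.mean(axis=1)[:, p300_win].mean(axis=1)
row = int(scores[:6].argmax())                      # most P300-like row
col = int(scores[6:].argmax())                      # most P300-like column
print("attended letter at grid cell:", (row, col))
```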
Article
Full-text available
In this paper, we propose an imagined speech-based brain-wave pattern recognition approach using deep learning. Multiple features were extracted concurrently from eight-channel electroencephalography (EEG) signals. To obtain classifiable EEG data with fewer sensors, we placed the EEG sensors on carefully selected spots on the scalp. To reduce the dimensionality and complexity of the EEG dataset and to avoid overfitting during deep learning, we utilized the wavelet scattering transformation, which extracts the most stable features by passing the EEG dataset through a series of filtering stages; filtering was applied to each individual command in the EEG datasets. A low-cost 8-channel EEG headset was used with MATLAB 2023a to acquire the EEG data. A long short-term memory recurrent neural network (LSTM-RNN) was used to decode the identified EEG signals into four audio commands: up, down, left, and right. The proposed imagined speech-based brain-wave pattern recognition approach achieved a 92.50% overall classification accuracy, which is promising for designing trustworthy imagined speech-based brain-computer interface (BCI) real-time systems. For a fuller evaluation of classification performance, other metrics were also considered: we obtained 92.74%, 92.50%, and 92.62% for precision, recall, and F1-score, respectively.
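The pipeline this abstract describes (wavelet-scattering features feeding an LSTM that maps EEG epochs to the four commands) could look roughly like the following PyTorch sketch. The kymatio package is assumed for the scattering transform, and every shape, hyperparameter, and data tensor is a placeholder rather than the authors' actual configuration:

```python
import torch
import torch.nn as nn
from kymatio.torch import Scattering1D    # wavelet scattering (assumed dependency)

T = 1024                                  # samples per EEG epoch (placeholder)
scattering = Scattering1D(J=6, shape=T, Q=8)  # stable, low-dimensional features

class CommandLSTM(nn.Module):
    """LSTM decoder for the four commands: up, down, left, right."""
    def __init__(self, n_features, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # classify from the last time step

eeg = torch.randn(8, T)                   # batch of single-channel epochs (dummy)
feats = scattering(eeg).permute(0, 2, 1)  # (batch, time', scattering coeffs)
logits = CommandLSTM(feats.shape[-1])(feats)
```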
... The availability of noninvasive EEG devices for measuring speech-related neural activity in the brain, together with advanced deep learning techniques, has contributed to the development of the imagined speech-based BCI, which is expected to be the imminent verbal communication alternative for speech-disordered individuals. The imagined speech EEG-based BCI system decodes or translates the subject's imagined speech signals from the brain into messages for communication with others or into machine-recognition instructions for machine control [6]. Decoding imagined speech from brain signals to benefit humanity is one of the most appealing research areas. ...
... The one-dimensional feature learning vectors of the TCN and CNN were then concatenated. The exponential linear unit (ELU) activation function defined in equation (6) was applied to the combined feature vector [40]. The combined feature vector was used in the dense or fully connected layers for feature information transformation or extraction, and the extracted information was used in the multiclass speech classification. ...
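The paper's equation (6) is not reproduced in this excerpt; for reference, the standard definition of the ELU activation, with slope parameter α > 0, is:

```latex
\mathrm{ELU}(x) =
\begin{cases}
  x, & x > 0, \\
  \alpha \left( e^{x} - 1 \right), & x \le 0.
\end{cases}
```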
Article
Full-text available
The paper focuses on decoding imagined speech from electroencephalography (EEG) neural signals, in line with the expansion of the brain-computer interface to encompass individuals with speech problems who face communication challenges. Decoding an individual's imagined speech from nonstationary and nonlinear EEG signals is a complex task, and related work in the field has revealed that imagined speech decoding performance and accuracy still require improvement. The evolution of deep learning technology increases the likelihood of decoding imagined speech from EEG signals with enhanced performance. We proposed a novel supervised deep learning model that combined temporal convolutional networks and convolutional neural networks to retrieve information from the EEG signals. The experiment was carried out on an open-access dataset of fifteen subjects' imagined-speech multichannel signals of vowels and words. The raw multichannel EEG signals were preprocessed using the discrete wavelet transform. The model was trained and evaluated on the preprocessed signals, and its hyperparameters were tuned to achieve higher classification accuracy. The results demonstrated that the proposed model's multiclass imagined speech classification reached an overall accuracy of 0.9649 with a classification error rate of 0.0350. The results indicate that individuals with speech difficulties might well be able to leverage a noninvasive EEG-based imagined speech brain-computer interface system as a long-term alternative artificial verbal communication medium.
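Although the paper's exact architecture is not given in this excerpt, the fusion step described in the citation context above (concatenating the one-dimensional TCN and CNN feature vectors, applying ELU, then dense layers for multiclass classification) could be sketched as follows; all dimensions and the class count are illustrative assumptions:

```python
import torch
import torch.nn as nn

batch = 32
tcn_feats = torch.randn(batch, 128)   # 1-D feature vector from the TCN branch (dummy)
cnn_feats = torch.randn(batch, 256)   # 1-D feature vector from the CNN branch (dummy)

combined = torch.cat([tcn_feats, cnn_feats], dim=1)  # feature-level fusion
combined = nn.functional.elu(combined)               # ELU on the combined vector

head = nn.Sequential(                 # dense layers for multiclass classification
    nn.Linear(128 + 256, 64),
    nn.ELU(),
    nn.Linear(64, 11),                # e.g. 5 vowels + 6 words (assumed count)
)
logits = head(combined)               # (batch, n_classes)
```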
... It is this problem that has inspired interest in developing non-muscle communication channels for paralyzed individuals [24]. This promising field is now known as brain-computer interface (BCI) technology [8,13,19,24,43,86,56,61,80]. A case has been described in which a patient with amyotrophic lateral sclerosis (ALS) was able to communicate at the stage of complete muscle paralysis using such a neural interface [87]. ...
Article
Although a significant number of studies have investigated the electrographic correlates and neurophysiological mechanisms of spoken and inner (imagined) speech, the question of which EEG characteristics reflect its content remains open. Given that speech is a complex cognitive process requiring coordinated activity across a number of cortical structures of the cerebral hemispheres, EEG coherence values were studied. The values were recorded from 14 channels in 10 young men during real verbalization (spoken speech) and during imagined pronunciation of words denoting directions in space (up, down, right, left, forward, backward). It was shown that the level of EEG coherence is generally higher for real verbalization, most significantly at gamma-2-rhythm frequencies (55–70 Hz). Spatial coherence patterns specific to a number of words are formed in the left cerebral hemisphere during imagined utterance of words at gamma-2 frequencies. Machine learning and neural network classification demonstrated a significant similarity between the spatial coherence patterns of spoken and inner (imagined) speech. A multilayer perceptron (MLP) neural network classifier detected words in imagined speech from brain activity patterns with accuracies of up to 49–61% for 3 classes and 33–40% for 7 classes, against chance levels of 33.3% and 14.2%, respectively. The latter indicates the promise of coherence values and imagined speech denoting spatial directions for the development of brain-computer interfaces (BCIs).
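The feature-plus-classifier recipe this abstract describes (pairwise gamma-2-band coherence values fed to an MLP word classifier) might look like the following Python sketch; the sampling rate, trial counts, and synthetic data are assumptions, not the study's parameters:

```python
import numpy as np
from itertools import combinations
from scipy.signal import coherence
from sklearn.neural_network import MLPClassifier

def gamma2_coherence(eeg, fs=250):
    """eeg: (n_channels, n_samples). Mean 55-70 Hz coherence per channel pair."""
    feats = []
    for i, j in combinations(range(eeg.shape[0]), 2):  # all C(14, 2) = 91 pairs
        f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=fs)
        band = (f >= 55) & (f <= 70)                   # gamma-2 band
        feats.append(cxy[band].mean())
    return np.asarray(feats)

rng = np.random.default_rng(0)
# One feature vector per imagined-word trial; labels for 3 word classes (dummy)
X = np.stack([gamma2_coherence(rng.standard_normal((14, 1000))) for _ in range(60)])
y = rng.integers(0, 3, size=60)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
```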
... As for assistive purposes, EEG permits disabled people to communicate their opinions and ideas via a variety of methods such as spelling applications (Birbaumer and Cohen, 2007;Akcakaya et al., 2014;Birbaumer et al., 2014;Rezeika et al., 2018), semantic categorization (Stothart et al., 2017), or silent speech communication (Brumberg et al., 2010;Mohanchandra et al., 2015). This may facilitate advanced hands-free applications, which may provide disabled people ease and comfort. ...
Article
Full-text available
The deployment of electroencephalographic techniques for commercial applications has undergone a rapid growth in recent decades. As they continue to expand in the consumer markets as suitable techniques for monitoring the brain activity, their transformative potential necessitates equally significant ethical inquiries. One of the main questions, which arises then when evaluating these kinds of applications, is whether they should be aligned or not with the main ethical concerns reported by scholars and experts. Thus, the present work attempts to unify these disciplines of knowledge by performing a comprehensive scan of the major electroencephalographic market applications as well as their most relevant ethical concerns arising from the existing literature. In this literature review, different databases were consulted, which presented conceptual and empirical discussions and findings about commercial and ethical aspects of electroencephalography. Subsequently, the content was extracted from the articles and the main conclusions were presented. Finally, an external assessment of the outcomes was conducted in consultation with an expert panel in some of the topic areas such as biomedical engineering, biomechatronics, and neuroscience. The ultimate purpose of this review is to provide a genuine insight into the cutting-edge practical attempts at electroencephalography. By the same token, it seeks to highlight the overlap between the market needs and the ethical standards that should govern the deployment of electroencephalographic consumer-grade solutions, providing a practical approach that overcomes the engineering myopia of certain ethical discussions.
... Among these phases, speech imagination finds potential application as a BCI control [19,20]. An imagined speech BCI is highly advantageous for individuals who are unable to move their articulators because of physical disabilities such as locked-in syndrome or advanced amyotrophic lateral sclerosis [21]. ...
Preprint
Translation of imagined speech electroencephalogram (EEG) signals into human-understandable commands greatly facilitates the design of naturalistic brain-computer interfaces. To achieve improved imagined speech unit classification, this work aims to exploit the parallel information contained in multi-phasal EEG data recorded while speaking, imagining, and performing articulatory movements corresponding to specific speech units. A bi-phase common representation learning module using neural networks is designed to model the correlation and reproducibility between an analysis phase and a support phase. The trained correlation network is then employed to extract discriminative features of the analysis phase. These features are further classified into five binary phonological categories using machine learning models such as a Gaussian-mixture-based hidden Markov model and deep neural networks. The proposed approach also handles the non-availability of multi-phasal data during decoding. Topographic visualizations along with result-based inferences suggest that the proposed multi-phasal correlation modelling approach enhances imagined-speech EEG recognition performance.
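The bi-phase common representation idea (two encoders trained so that analysis-phase and support-phase representations agree, after which the analysis-phase encoder supplies discriminative features) can be caricatured as below. Cosine similarity stands in for the paper's correlation objective, and every dimension, name, and data tensor is an assumption:

```python
import torch
import torch.nn as nn

class PhaseEncoder(nn.Module):
    """Maps one phase's EEG feature vector into a shared representation space."""
    def __init__(self, in_dim, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, out_dim))

    def forward(self, x):
        return self.net(x)

enc_a, enc_s = PhaseEncoder(128), PhaseEncoder(128)  # analysis / support phases
opt = torch.optim.Adam(list(enc_a.parameters()) + list(enc_s.parameters()), lr=1e-3)

xa, xs = torch.randn(64, 128), torch.randn(64, 128)  # paired trials (dummy data)
for _ in range(100):
    za, zs = enc_a(xa), enc_s(xs)
    # Maximize agreement between phases (proxy for the correlation objective)
    loss = -nn.functional.cosine_similarity(za, zs).mean()
    opt.zero_grad(); loss.backward(); opt.step()

features = enc_a(xa).detach()   # discriminative analysis-phase features for a classifier
```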