Block diagram of Hybrid Brain-Computer Interface (BCI).

Source publication
Article
Full-text available
A Brain-Computer Interface (BCI) acts as a communication mechanism that uses brain signals to control external devices. The generation of such signals is sometimes independent of the nervous system, as in a Passive BCI. This is particularly beneficial for those who have severe motor disabilities. Traditional BCI systems have been dependent only on brain...

Contexts in source publication

Context 1
... systems use data fusion techniques and machine-learning algorithms to fuse complementary signals. This approach is termed a Hybrid BCI, as demonstrated in Figure 1. Any Hybrid BCI system must fulfil four major criteria, which are as follows [42,43]: 1. ...
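The fusion of complementary signals mentioned in this excerpt can be sketched as feature-level fusion, where features from each modality are concatenated before classification. The sketch below is illustrative only, not the cited system: the feature matrices, dimensions, and labels are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for 200 trials: EEG band-power features and
# complementary eye-tracking features (dimensions are arbitrary).
eeg_features = rng.normal(size=(200, 16))
eye_features = rng.normal(size=(200, 4))
# The label depends on both modalities, so neither alone suffices.
labels = (eeg_features[:, 0] + eye_features[:, 0] > 0).astype(int)

# Feature-level fusion: concatenate the complementary feature sets and
# let a single classifier learn the joint mapping.
fused = np.concatenate([eeg_features, eye_features], axis=1)

scores = cross_val_score(LogisticRegression(max_iter=1000), fused, labels, cv=5)
print(f"5-fold accuracy on fused features: {scores.mean():.2f}")
```

Concatenation is the simplest fusion scheme; decision-level fusion (combining per-modality classifier outputs) is a common alternative.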
Context 2
... biofeedback-based system is used to extract features such as attention, intention, and focus. Figure 10b shows the actual workflow. The task of the experiment was to grasp a glass of water. ...
Context 3
... task of the experiment was to grasp a glass of water. System Design: A NAO humanoid is used along with a BCI system that includes a bio-signal amplifier, which converts the user's brain signals into digital form, and a tracker, which tracks the location of the user's eye focus, as shown in Figure 10a. The components of the system are as follows: ...
Context 4
... the online part, the authors evaluated the experiments in terms of: (i) performance (accuracy and response time); (ii) task execution (this method has been used extensively in other case studies as well, in which the user is asked to perform a set of tasks on the robot); and (iii) workload (to measure qualitative parameters). Figure 11a shows the two-level hierarchical menu displayed on the user's screen to allow them to control the interface, as shown in Figure 11b. All similar tasks are grouped under a category in the two-level interface. ...
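The performance criterion (i) above reduces to simple per-trial bookkeeping. As a minimal illustration (all trial values below are made up), accuracy and mean response time could be computed as:

```python
import numpy as np

# Hypothetical per-trial logs: intended command, decoded command, and
# time (s) from stimulus onset to command issuance.
intended = np.array([0, 1, 2, 1, 0, 2, 2, 1])
decoded  = np.array([0, 1, 2, 0, 0, 2, 1, 1])
response_time_s = np.array([1.8, 2.1, 1.9, 2.5, 1.7, 2.0, 2.3, 1.9])

accuracy = np.mean(intended == decoded)   # fraction of correct selections
mean_rt = response_time_s.mean()          # average response time in seconds
print(f"accuracy={accuracy:.2f}, mean response time={mean_rt:.3f}s")
```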
Context 6
... once the task was selected, execution was triggered by a teeth-clenching movement. All the categories and one of the tasks used in [73], along with the transitions, are shown in Figure 11b. ...
Context 7
... Design: The experiment [75] is divided into five phases, as shown in Figure 12. Major characteristics of these phases are listed below: ...
Context 8
... markers were placed on the HMD and the user's arms, which helped in performing the automated phases. As shown in Figure 13a, SSVEP was evoked by flickering the body parts, which was used for body-part selection by the user. A g.USBamp was used to acquire the data at a sampling rate of 256 Hz, combined with a band-pass filter (0.5-30 Hz) and a notch filter (50 Hz). ...
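The acquisition settings quoted above (256 Hz sampling, 0.5-30 Hz band-pass, 50 Hz notch) map directly onto a standard offline filtering step. A minimal sketch using SciPy, assuming zero-phase Butterworth and IIR-notch filters (the filter order and notch Q are assumptions, not values from the paper):

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 256  # sampling rate (Hz), as in the study

# Band-pass 0.5-30 Hz (4th-order Butterworth, an assumed order) and a
# 50 Hz notch, mirroring the acquisition settings described above.
b_bp, a_bp = butter(4, [0.5, 30], btype="bandpass", fs=FS)
b_n, a_n = iirnotch(50, Q=30, fs=FS)

def preprocess(eeg):
    """Apply zero-phase band-pass then notch filtering to a 1-D signal."""
    return filtfilt(b_n, a_n, filtfilt(b_bp, a_bp, eeg))

# Quick check on a synthetic signal: 10 Hz (in band) + 50 Hz (line noise).
t = np.arange(FS * 4) / FS
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
y = preprocess(x)
```

Zero-phase filtering via `filtfilt` avoids phase distortion, which matters when latency-sensitive components such as SSVEP are analyzed offline.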
Context 9
... SSVEP was evoked during the interaction-selection phase as well. Finally, as shown in Figure 13b, the robot adjusted itself in small steps and initiated the action once it reached a comfortable pose. ...

Similar publications

Preprint
Full-text available
A Brain-Computer Interface (BCI) allows its user to interact with a computer or other machines using only their brain activity. People with motor disabilities are potential users of this technology, since it could allow them to interact with their surroundings without using their peripheral nerves, helping them regain their lost autonomy. The P30...

Citations

... Advancements in prosthetics have allowed the brain to control external applications through brain-computer interface (BCI) devices. BCIs acquire signals from the brain; these signals are then processed and analysed so that they can be translated into commands to control external devices or carry out a specific action, such as hand-grasp actions [1], [2]. BCI devices are used in a multitude of different applications, such as prosthetics and humanoid robotics [3], [4]. A study by Zhang discusses the difficulties of using BCI devices when implemented in specific applications such as soft robotics: BCI devices have difficulty delivering the commands that robotic applications require in multitask control scenarios [5]. ...
Conference Paper
Full-text available
Prosthetic hands help restore function to an amputee who does not have full control over their muscles or joints. A Brain-Computer Interface (BCI) can increase the control accuracy of an individual's upper-limb prosthetic. Two non-invasive methods are based on the electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) pathways. These pathways can be integrated with surface electromyography (sEMG), which measures the electrical activity produced by skeletal muscles. In this paper, an fNIRS-based approach is investigated to determine its suitability for prosthetic limb control as a substitute for EEG-based prostheses in real-world settings, with a discussion on overcoming the limitations of EEG. A systematic and comprehensive state-of-the-art review was conducted across reputed databases such as PubMed and Web of Science, covering January 2017 to March 2024, using relevant keywords. Both techniques were investigated in this study, including hybrid implementations combining each technology with sEMG. The results of this review show that vast research has been conducted on EEG-based paradigms and BCI; however, fNIRS demonstrates more promise, offering significant advantages over EEG in real-world applications. Moreover, hybrid control systems based on fNIRS/sEMG and EEG/sEMG improve the quality of the control procedure, as sEMG alone cannot provide complete information to control complex prosthetics. Finally, it is identified that there are limited studies focusing on hybrid fNIRS/sEMG implementation, highlighting a research gap in BCI studies.
... Electroencephalography (EEG) is widely utilized as a brain signal in the development of BCI-controlled robotic systems, primarily due to its non-invasive nature, high temporal resolution, and user-friendliness (Gao et al. 2014; Chamola et al. 2020; Chen et al. 2020; Tonin and Millan 2021). For example, Bell et al. developed a navigation system for controlling a humanoid robot, which utilized a P300-BCI to instruct the robot to retrieve a desired object and transport it to a specific location (Bell et al. 2008). ...
Article
Full-text available
A brain-computer interface (BCI)-based robot combines BCI and robotics technology to realize the brain's intention to control the robot, which not only opens up a new way for the daily care of disabled individuals, but also provides a new means of communication for able-bodied people. However, existing systems still have shortcomings in many aspects, such as the friendliness of human-computer interaction and interaction efficiency. This study developed a humanoid robot control system by integrating an augmented reality (AR)-based BCI with a simultaneous localization and mapping (SLAM)-based scheme for autonomous indoor navigation. An 8-target steady-state visual evoked potential (SSVEP)-based BCI was implemented to enable direct control of the humanoid robot by the user. A Microsoft HoloLens was utilized to display visual stimuli for eliciting SSVEPs. Filter bank canonical correlation analysis (FBCCA), a training-free method, was used to detect SSVEPs in this study. By leveraging SLAM technology, the proposed system alleviates the need for frequent transmission of control commands from the user, thereby effectively reducing their workload. Online results from 12 healthy subjects showed that this BCI system was able to select a command out of eight potential targets with an average accuracy of 94.79%. The autonomous navigation subsystem enabled the humanoid robot to navigate autonomously to a destination chosen using the proposed BCI. Furthermore, all participants successfully completed the experimental task using the developed system without any prior training. These findings illustrate the feasibility of the developed system and its potential to contribute novel insights into humanoid robot control strategies.
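Filter bank canonical correlation analysis (FBCCA) scores each candidate stimulation frequency by correlating sub-band-filtered EEG with sine/cosine references. The sketch below is a minimal, generic FBCCA, not the authors' implementation; the sampling rate, sub-band edges, harmonic count, and the weights k^-1.25 + 0.25 are assumptions following common choices in the FBCCA literature:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate (Hz); not stated in the abstract

def cca_corr(X, Y):
    """First canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def references(freq, n_samples, n_harmonics=3):
    """Sine/cosine reference signals for one candidate frequency."""
    t = np.arange(n_samples) / FS
    return np.column_stack(
        [f(2 * np.pi * freq * h * t)
         for h in range(1, n_harmonics + 1) for f in (np.sin, np.cos)])

def fbcca_score(eeg, freq, n_bands=5):
    """Weighted sum of squared CCA correlations over sub-bands."""
    score = 0.0
    for k in range(1, n_bands + 1):
        b, a = butter(4, [8 * k, 88], btype="bandpass", fs=FS)
        sub = filtfilt(b, a, eeg, axis=0)   # sub-band k covers [8k, 88] Hz
        w = k ** -1.25 + 0.25               # assumed FBCCA weighting
        score += w * cca_corr(sub, references(freq, len(eeg))) ** 2
    return score

# Demo: 2 s of synthetic two-channel EEG containing a 12 Hz SSVEP.
rng = np.random.default_rng(1)
t = np.arange(2 * FS) / FS
ssvep = np.sin(2 * np.pi * 12 * t)
eeg = np.column_stack([ssvep + 0.5 * rng.normal(size=t.size),
                       0.8 * ssvep + 0.5 * rng.normal(size=t.size)])
candidates = [10.0, 12.0, 15.0]
detected = candidates[int(np.argmax([fbcca_score(eeg, f) for f in candidates]))]
print("detected frequency:", detected)
```

As the abstract notes, FBCCA is training-free: no per-user calibration data are needed before online use.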
... Recent advances in BCIs have demonstrated the efficacy of translating recorded EEG signals into actions that represent users' intentions. Successful examples of BCIs include EEG-speller systems [4][5][6][7], wheelchair control [8,9], upper-and lower-limb prosthetics control [10][11][12], robot control [13,14], and brain-controlled games [15]. In addition, BCI has also been demonstrated to represent a novel human-computer interaction technology that is not limited only to people with disabilities [16][17][18]. ...
Article
Full-text available
Since their inception more than 50 years ago, Brain-Computer Interfaces (BCIs) have held promise to compensate for functions lost by people with disabilities through allowing direct communication between the brain and external devices. While research throughout the past decades has demonstrated the feasibility of BCI to act as a successful assistive technology, the widespread use of BCI outside the lab is still beyond reach. This can be attributed to a number of challenges that need to be addressed for BCI to be of practical use including limited data availability, limited temporal and spatial resolutions of brain signals recorded non-invasively and inter-subject variability. In addition, for a very long time, BCI development has been mainly confined to specific simple brain patterns, while developing other BCI applications relying on complex brain patterns has been proven infeasible. Generative Artificial Intelligence (GAI) has recently emerged as an artificial intelligence domain in which trained models can be used to generate new data with properties resembling that of available data. Given the enhancements observed in other domains that possess similar challenges to BCI development, GAI has been recently employed in a multitude of BCI development applications to generate synthetic brain activity; thereby, augmenting the recorded brain activity. Here, a brief review of the recent adoption of GAI techniques to overcome the aforementioned BCI challenges is provided demonstrating the enhancements achieved using GAI techniques in augmenting limited EEG data, enhancing the spatiotemporal resolution of recorded EEG data, enhancing cross-subject performance of BCI systems and implementing end-to-end BCI applications. 
GAI could represent the means by which BCI would be transformed into a prevalent assistive technology, thereby improving the quality of life of people with disabilities, and helping in adopting BCI as an emerging human-computer interaction technology for general use.
... A brain-computer interface (BCI) is a technology that can transform the brain's neural activity into the user's intended output or mental activity, enabling direct communication and control between the brain and the external world without relying on peripheral nerves and muscle tissue (Vinay et al. 2020; Jelena et al. 2021). As the primary source of sensory input for human beings, visual information plays a crucial role in the development of BCI technology in the realm of communication restoration. ...
Article
Full-text available
In visual-imagery-based brain-computer interfaces (VI-BCI), the singleness of the imagination task and the insufficient description of feature information seriously hinder the development and application of VI-BCI technology in the field of restoring communication. In this paper, we design and optimize a multi-character classification scheme based on electroencephalogram (EEG) signals of visual imagery (VI), which is used to classify 29 characters, including 26 lowercase English letters and three punctuation marks. Firstly, a new paradigm that presents characters randomly and includes a preparation stage is designed to acquire EEG signals and construct a multi-character dataset, which can eliminate the influence between VI tasks. Secondly, tensor data are obtained by the Morlet wavelet transform, and a feature extraction algorithm based on tensor uncorrelated multilinear principal component analysis (UMPCA) is used to extract high-quality features. Finally, three classifiers, namely support vector machine, K-nearest neighbor, and extreme learning machine, are employed for multi-character classification, and the results are compared. The experimental results demonstrate that the proposed scheme effectively extracts character features with minimal redundancy, weak correlation, and strong representation capability, and achieves an average classification accuracy of 97.59% for 29 characters, surpassing existing research in terms of both accuracy and number of classes. The present study designs a new paradigm for acquiring EEG signals of VI and combines the Morlet wavelet transform with the UMPCA algorithm to extract character features, enabling multi-character classification with various classifiers. This research paves a novel pathway for establishing direct brain-to-world communication.
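The Morlet-wavelet feature extraction step can be illustrated on a toy two-class problem. This is not the paper's 29-character pipeline (UMPCA is omitted and the data are synthetic); it only shows how wavelet power at a few assumed frequencies (6, 10, and 14 Hz here) can feed a support vector machine:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 250  # assumed sampling rate (Hz)

def morlet_power(signal, freq, n_cycles=7):
    """Instantaneous power at `freq` via convolution with a complex Morlet wavelet."""
    sigma = n_cycles / (2 * np.pi * freq)           # wavelet width in seconds
    t = np.arange(-3 * sigma, 3 * sigma, 1 / FS)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    wavelet /= np.abs(wavelet).sum()                # unit-gain normalization
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

# Toy dataset: class 1 trials contain extra 10 Hz activity, class 0 do not.
rng = np.random.default_rng(2)
n_samples = 2 * FS
time = np.arange(n_samples) / FS
X, y = [], []
for label in (0, 1):
    for _ in range(30):
        trial = rng.normal(size=n_samples)
        if label:
            trial += np.sin(2 * np.pi * 10 * time)
        # Mean wavelet power at a few frequencies forms the feature vector.
        X.append([morlet_power(trial, f).mean() for f in (6.0, 10.0, 14.0)])
        y.append(label)

scores = cross_val_score(SVC(), np.array(X), np.array(y), cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

In the paper's pipeline, the wavelet transform yields a tensor per trial and UMPCA reduces it before classification; the sketch collapses that to mean power per frequency for brevity.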
... Nevertheless, the accuracy of these systems has been significantly improved with the recent integration of machine learning-based translation algorithms and multi-sensor data fusion. This is achieved by including telepresence, object grasping, and navigation, with the help of multi-sensor fusion and machine learning techniques to control humanoid robots, as indicated in Figure 5 [173]. In 2020, Y. Boon et al. proposed using machine learning for several aspects of structural component design. ...
... Block diagram of BCI [173]. ...
Article
Full-text available
This review focuses on the complex connections between machine learning, mechatronics, and stretch forming, offering valuable insights that can lay the groundwork for future research. It provides an overview of the origins and fundamentals of these fields, emphasizes notable progress, and explores the influence of these fields on society and industry. Also highlighted is the progress of robotics research and particularities in the field of sheet metal forming and its various applications. This review paper focuses on presenting the latest technological advancements and the integrations of these fields from their beginnings to the present days, providing insights into future research directions.
... The closer the feedback is to the event, the more likely change is to occur (Kumar et al., 2019). The use of fNIRS and other non-invasive measurement devices, in combination with machine learning classification tools such as artificial neural networks (ANNs), can provide real-time data on a client's cognitive state and level of skill development within milliseconds (Chamola et al., 2020). This allows for near-instantaneous adjustment of the DCE and rapid selection of specific mental health content for the client. ...
... These devices typically rely on the interpretation of electrophysiological brain signals, commonly captured using techniques such as electroencephalography (EEG), electrocorticography (ECoG), and near-infrared spectroscopy (NIRS) [15,16]. Among these techniques, EEG is the most widely practiced for BCI applications [16][17][18]. ...
Chapter
Full-text available
Brain-computer interface (BCI) is an innovative method of integrating technology for healthcare. Utilizing BCI technology allows for direct communication and/or control between the brain and an external device, thereby displacing conventional neuromuscular pathways. The primary goal of BCI in healthcare is to repair or reinstate useful function to people who have impairments caused by neuromuscular disorders (e.g., stroke, amyotrophic lateral sclerosis, spinal cord injury, or cerebral palsy). BCI brings with it technical and usability flaws in addition to its benefits. We present an overview of BCI in this chapter, followed by its applications in the medical sector in diagnosis, rehabilitation, and assistive technology. We also discuss BCI’s strengths and limitations, as well as its future direction.
... This technology has been widely explored in the past few decades. The BCI system has great potential for applications in many fields, including clinical rehabilitation training programs (Bai et al., 2020; Mane et al., 2020; Brusini et al., 2021), typing communication systems (Wolpaw et al., 2002; Milekovic et al., 2018; Zhang et al., 2018; Renton et al., 2019), robotics (Bi et al., 2013; Chamola et al., 2020; Baniqued et al., 2021), entertainment (Noor et al., 2018; Pradhapan et al., 2018; Wang et al., 2019; Li et al., 2021), and so on. The recording methods of brain activity can be divided into two main categories: invasive and non-invasive (Zhuang et al., 2020). ...
Article
Full-text available
Advances in neuroscience and computer technology over the past decades have made the brain-computer interface (BCI) a most promising area of neurorehabilitation and neurophysiology research. Limb motion decoding has gradually become a hot topic in the field of BCI. Decoding neural activity related to limb movement trajectories is considered to be of great help to the development of assistive and rehabilitation strategies for motor-impaired users. Although a variety of decoding methods have been proposed for limb trajectory reconstruction, there does not yet exist a review that covers the performance evaluation of these decoding methods. To fill this gap, in this paper we evaluate EEG-based limb trajectory decoding methods regarding their advantages and disadvantages from a variety of perspectives. Specifically, we first introduce the differences between motor execution and motor imagery in limb trajectory reconstruction in different spaces (2D and 3D). Then, we discuss limb motion trajectory reconstruction methods, including the experimental paradigm, EEG pre-processing, feature extraction and selection, decoding methods, and result evaluation. Finally, we expound on open problems and future outlooks.
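A common baseline among the trajectory-decoding methods surveyed here is linear regression on time-lagged EEG features. The sketch below uses ridge regression on synthetic data; the sampling length, channel count, lag count, and the EEG-trajectory relationship are all invented for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n_samples, n_channels, lags = 2000, 8, 5

# Synthetic stand-in: the "hand position" is a smooth random walk and each
# EEG channel carries a noisy, delayed copy of it (purely illustrative).
pos = np.cumsum(rng.normal(size=n_samples)) / 10
eeg = np.column_stack([np.roll(pos, k % lags) + rng.normal(size=n_samples)
                       for k in range(n_channels)])

# Lagged design matrix: each row stacks the current and `lags - 1` past
# EEG samples, the standard setup for linear trajectory decoders.
X = np.column_stack([np.roll(eeg, k, axis=0) for k in range(lags)])[lags:]
y = pos[lags:]

# Train on the first half, evaluate on the held-out second half.
split = len(y) // 2
model = Ridge(alpha=1.0).fit(X[:split], y[:split])
r = np.corrcoef(model.predict(X[split:]), y[split:])[0, 1]
print(f"decoding correlation: {r:.2f}")
```

Pearson correlation between decoded and true trajectories is a widely used evaluation metric for such decoders, which is why the demo reports it.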
... For example, P300 and the steady-state visually evoked potential (SSVEP) are based on the "evoked" potential (Chamola et al., 2020). By contrast, motor imagery (MI) is the process by which an individual stimulates a physical reaction via mental stimulation (Pfurtscheller and Neuper, 2006). ...
Article
Full-text available
Emerging brain technologies have significantly transformed human life in recent decades. For instance, the closed-loop brain-computer interface (BCI) is an advanced software-hardware system that interprets electrical signals from neurons, allowing communication with and control of the environment. The system then transmits these signals as controlled commands and provides feedback to the brain to execute specific tasks. This paper analyzes and presents the latest research on closed-loop BCI that utilizes electric/magnetic stimulation, optogenetic, and sonogenetic techniques. These techniques have demonstrated great potential in improving the quality of life for patients suffering from neurodegenerative or psychiatric diseases. We provide a comprehensive and systematic review of research on the modalities of closed-loop BCI in recent decades. To achieve this, the authors used a set of defined criteria to shortlist studies from well-known research databases into categories of brain stimulation techniques. These categories include deep brain stimulation, transcranial magnetic stimulation, transcranial direct-current stimulation, transcranial alternating-current stimulation, and optogenetics. These techniques have been useful in treating a wide range of disorders, such as Alzheimer's and Parkinson's disease, dementia, and depression. In total, 76 studies were shortlisted and analyzed to illustrate how closed-loop BCI can considerably improve, enhance, and restore specific brain functions. The analysis revealed that literature in the area has not adequately covered closed-loop BCI in the context of cognitive neural prosthetics and implanted neural devices. However, the authors demonstrate that the applications of closed-loop BCI are highly beneficial, and the technology is continually evolving to improve the lives of individuals with various ailments, including those with sensory-motor issues or cognitive deficiencies. 
By utilizing emerging techniques of stimulation, closed-loop BCI can safely improve patients' cognitive and affective skills, resulting in better healthcare outcomes.
... More specifically, voice interfaces [44,45] and augmented/virtual reality (AR/VR) interfaces [46,47] have been attempted in various fields. Brain-computer interfaces (BCIs) [48] using an electroencephalogram (EEG) [49,50] or an electromyogram (EMG) [51,52] are also being studied. However, the user interface types are inflexible if the scope is limited to commercialized MARs. ...
Article
Full-text available
Various meal-assistance robot (MAR) systems are being studied, and several products have already been commercialized to alleviate the imbalance between the rising demand and diminishing supply of meal care services. However, several challenges remain. First, most of these services can serve limited types of western food using a predefined route. Additionally, their spoon or fork sometimes makes it difficult to acquire Asian food that is easy to handle with chopsticks. In addition, their limited user interface, requiring physical contact, makes it difficult for people with severe disabilities to use MARs alone. This paper proposes an MAR system that is suitable for the diet of Asians who use chopsticks. This system uses Mask R-CNN to recognize the food area on the plate and estimates the acquisition points for each side dish. The points become target points for robot motion planning. Depending on which food the user selects, the robot uses chopsticks or a spoon to obtain the food. In addition, a non-contact user interface based on face recognition was developed for users with difficulty physically manipulating the interface. This interface can be operated on the user’s Android OS tablet without the need for a separate dedicated display. A series of experiments verified the proposed system’s effectiveness and feasibility.