Figure (Frontiers in Neuroscience): (A) Nasion, (B) right preauricular point, (C) surface rotation for feature detection, and (D) detection of continuities and discontinuities.

Source publication
Article
Full-text available
The reconstruction of electrophysiological sources within the brain is sensitive to the constructed head model, which depends on the positioning accuracy of anatomical landmarks known as fiducials. In this work, we propose an algorithm for the automatic detection of fiducial landmarks of EEG electrodes on a 3D human head model. Our proposal combi...
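As a rough illustration of how the nasion and preauricular fiducials feed into head-model construction, the sketch below builds a digitization montage with MNE-Python; the coordinates, the single electrode, and the use of MNE itself are assumptions for demonstration, not the algorithm proposed in the source publication.

```python
# Minimal sketch (not the authors' detection algorithm): once nasion/LPA/RPA
# have been located on the head surface, they are typically passed to tooling
# such as MNE-Python to build a digitization montage for head-model
# coregistration. All coordinates are made-up placeholders in meters.
import mne

fiducials_m = {
    "nasion": (0.0, 0.102, 0.0),   # hypothetical head-frame coordinates
    "lpa": (-0.075, 0.0, 0.0),
    "rpa": (0.075, 0.0, 0.0),
}
electrode_pos_m = {"Cz": (0.0, 0.0, 0.095)}  # one example electrode

montage = mne.channels.make_dig_montage(
    ch_pos=electrode_pos_m,
    nasion=fiducials_m["nasion"],
    lpa=fiducials_m["lpa"],
    rpa=fiducials_m["rpa"],
    coord_frame="head",
)
print(montage)
```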

Citations

... Firstly, as each electrode corresponds to specific brain areas, harmonizing signal characteristics from electrodes at different locations under the same mental condition presents challenges. This is because the distinct characteristics of EEG signals vary among individuals across different inner states [15], and the electrode signals exhibit unique features at various positions [16], [17]. ...
... This section elucidates the initial regional division discussed in Section III-A, which divides all brain channels into K RDs using the International 10-20 system of EEG electrode arrangement and the approximate structural/functional symmetry between the left and right hemispheres [16]. To achieve this, we propose three hypotheses for multi-channel EEG generation pathways, illustrated in Fig. 2. For convenience, we denote O_ij as the source channel and O_im as the target channel within RD_i. ...
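The cited work does not list its exact regional partition here, so the following is only a minimal sketch of the idea: grouping 10-20 channels into regional divisions and exploiting the odd/even (left/right) naming symmetry to find a channel's homologous counterpart. The groupings and channel names below are illustrative assumptions, not the partition used in the cited work.

```python
# Illustrative sketch only: one possible split of 10-20 channels into
# regional divisions (RDs) that respects left/right hemispheric symmetry.
REGIONAL_DIVISIONS = {
    "frontal":   ["Fp1", "Fp2", "F3", "F4", "F7", "F8", "Fz"],
    "central":   ["C3", "C4", "Cz"],
    "temporal":  ["T3", "T4", "T5", "T6"],
    "parietal":  ["P3", "P4", "Pz"],
    "occipital": ["O1", "O2"],
}

def homologous(channel: str) -> str:
    """Return the mirror-symmetric channel (odd digit = left, even = right)."""
    if channel.endswith("z"):          # midline channels are their own mirror
        return channel
    prefix, digit = channel[:-1], int(channel[-1])
    mirror = digit + 1 if digit % 2 == 1 else digit - 1
    return f"{prefix}{mirror}"

print(homologous("F3"))  # -> F4
print(homologous("Cz"))  # -> Cz
```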
Preprint
High-precision acquisition of dense-channel electroencephalogram (EEG) signals is often impeded by the cost and limited portability of the required equipment. In contrast, generating dense-channel EEG signals from sparse channels is promising and economically viable. However, sparse-channel EEG poses challenges such as reduced spatial resolution, information loss, signal mixing, and heightened susceptibility to noise and interference. To address these challenges, we first theoretically formulate dense-channel EEG generation as the optimization of a set of cross-channel EEG signal generation problems. We then propose the YOAS framework for generating dense-channel data from sparse-channel EEG signals. YOAS consists of four sequential stages: Data Preparation, Data Preprocessing, Biased-EEG Generation, and Synthetic EEG Generation. Data Preparation and Preprocessing carefully account for the distribution of EEG electrodes and the low signal-to-noise ratio of EEG signals. Biased-EEG Generation includes the BiasEEGGanFormer and BiasEEGDiffFormer sub-modules, which facilitate long-term feature extraction with attention and generate signals by combining electrode position alignment with a diffusion model, respectively. Synthetic EEG Generation synthesizes the final signals, employing a deduction paradigm for multi-channel EEG generation. Extensive experiments confirmed YOAS's feasibility, efficiency, and theoretical validity, and even showed remarkable enhancement of data discernibility. This breakthrough in generating dense-channel EEG signals from sparse-channel data opens new avenues for EEG signal processing and application.
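To make the staged flow concrete, here is a hypothetical skeleton of a four-stage sparse-to-dense pipeline mirroring the abstract's Data Preparation, Data Preprocessing, Biased-EEG Generation, and Synthetic EEG Generation stages. Every function body is a placeholder; the actual BiasEEGGanFormer/BiasEEGDiffFormer models and the deduction paradigm are not reproduced here.

```python
# Hypothetical skeleton of the four-stage flow described in the abstract.
# Function names and internals are assumptions for illustration only.
import numpy as np

def prepare_data(raw_sparse_eeg: np.ndarray) -> np.ndarray:
    """Stage 1: select/arrange the available sparse channels."""
    return np.asarray(raw_sparse_eeg, dtype=np.float64)

def preprocess(eeg: np.ndarray) -> np.ndarray:
    """Stage 2: toy denoising step (per-channel z-scoring as a stand-in)."""
    mean = eeg.mean(axis=-1, keepdims=True)
    std = eeg.std(axis=-1, keepdims=True) + 1e-12
    return (eeg - mean) / std

def generate_biased_eeg(eeg: np.ndarray, n_dense: int) -> np.ndarray:
    """Stage 3: placeholder for the learned biased-EEG generators."""
    # A real model would predict a signal per target channel here.
    reps = int(np.ceil(n_dense / eeg.shape[0]))
    return np.tile(eeg, (reps, 1))[:n_dense]

def synthesize_dense_eeg(biased: np.ndarray) -> np.ndarray:
    """Stage 4: combine biased estimates into the final dense-channel EEG."""
    return biased  # a real implementation would fuse/deduce across channels

sparse = np.random.randn(8, 1000)   # 8 channels, 1000 samples of toy data
dense = synthesize_dense_eeg(
    generate_biased_eeg(preprocess(prepare_data(sparse)), n_dense=64)
)
print(dense.shape)                   # (64, 1000)
```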
... The three following lists correspond to the nasion fiducial point (nasion), the left preauricular fiducial point (lpa), and the right preauricular fiducial point (rpa). These points provide a precise reference for the EEG sensors' position on the head [37]. The conversion to a Pandas data frame gives the flexibility to import only the streams in which the researcher is interested. ...
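A minimal sketch of this fiducial-to-DataFrame step, assuming made-up coordinates and column names; it shows the kind of Pandas structure described, not MuseStudio's actual code.

```python
# Illustrative only: collect the nasion, lpa and rpa fiducial lists into a
# pandas DataFrame so that only the streams of interest need to be inspected.
# Coordinates are placeholders (millimetres).
import pandas as pd

nasion = [0.0, 102.0, 0.0]   # hypothetical values
lpa = [-75.0, 0.0, 0.0]
rpa = [75.0, 0.0, 0.0]

fiducials = pd.DataFrame(
    [nasion, lpa, rpa],
    index=["nasion", "lpa", "rpa"],
    columns=["x_mm", "y_mm", "z_mm"],
)
print(fiducials)
print(fiducials.loc["rpa"])   # pick out a single reference point
```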
Article
Full-text available
Collecting data allows researchers to store and analyze important information about activities, events, and situations. Gathering this information can also help us make decisions, control processes, and analyze what happens and when it happens. In fact, a scientific investigation is the way scientists use the scientific method to collect the data and evidence that they plan to analyze. Neuroscience and other related fields are set to collect their own big datasets, but to exploit their full potential, we need ways to standardize, integrate, and synthesize diverse types of data. Although the use of low-cost electroencephalography (EEG) devices, such as those priced below 300 USD, has increased, their role in neuroscience research activities has not been well supported; there are weaknesses in how their data and information are collected. The primary objective of this paper was to describe a tool for data management and visualization, called MuseStudio, for low-cost devices; specifically, our tool targets the Muse brain-sensing headband, a personal meditation assistant with additional possibilities. MuseStudio was developed in Python following best practices in data analysis and is fully compatible with the Brain Imaging Data Structure (BIDS), which specifies how brain data must be managed. Our open-source tool can import and export data from Muse devices, allows viewing brain data in real time, and its BIDS exporting capabilities can be successfully validated following the available guidelines. Moreover, these and other functional and nonfunctional features were validated by five experts acting as validators through the DESMET method, and a latency analysis was also performed and discussed. These validation activities confirmed that the tool successfully collects and manages electroencephalogram data.
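Since the abstract emphasizes BIDS compatibility, the sketch below shows a generic way to write an EEG recording into a BIDS layout with the mne-bids package; this is not MuseStudio's own API, and the sampling rate, channel names, subject/task labels, and output format are all assumptions.

```python
# Generic BIDS-export illustration (not MuseStudio's API): build a toy raw
# recording with Muse-style channel names and write it into a BIDS folder.
import numpy as np
import mne
from mne_bids import BIDSPath, write_raw_bids

sfreq = 256.0                                # assumed sampling rate
ch_names = ["TP9", "AF7", "AF8", "TP10"]     # Muse headband channel names
data = np.random.randn(len(ch_names), int(sfreq * 10)) * 1e-6  # 10 s of toy EEG
info = mne.create_info(ch_names, sfreq, ch_types="eeg")
raw = mne.io.RawArray(data, info)
raw.info["line_freq"] = 60                   # power-line frequency metadata

bids_path = BIDSPath(subject="01", session="01", task="meditation", root="bids_root")
# Writing preloaded data needs an explicit format backend (pybv for BrainVision).
write_raw_bids(raw, bids_path, overwrite=True, allow_preload=True, format="BrainVision")
```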
Thesis
Full-text available
Recent advancements in the field of neuroscience have brought to light the paramount importance of comprehending the intricacies of large-scale brain networks and the underlying dynamics that govern them. In this study, I present a pioneering toolbox that offers a novel and efficient approach for integrating high-dimensional Neural Mass Models (NMMs). These NMMs are defined by two crucial components: a set of nonlinear Random Differential Equations (RDEs) that dictate the dynamics of each neural mass, and a highly sparse three-dimensional Connectome Tensor (CT) that encodes vital information regarding the strength of connections and the delays of information transfer along the axons. To date, it has been commonplace to make simplistic assumptions about delays within the CT, often assuming them to be Dirac-delta functions. However, the reality is far more complex. Delays in information propagation are distributed due to the heterogeneous conduction velocities of the axons connecting neural masses. Modeling these distributed delays in the CT poses significant challenges. To address this issue, the approach presented in this paper incorporates several innovative techniques, including a tensor representation of the CT, which not only facilitates parallel computation but also enables the modeling of distributed delays with varying levels of complexity and realism. This tensor-based approach revolutionizes the field by offering flexibility in capturing the intricate dynamics of brain networks. By accommodating models with distributed-delay CTs, the toolbox paves the way for a more comprehensive understanding of brain functioning. Furthermore, the toolbox employs a semi-analytical integration of RDEs using the Local Linearization (LL) scheme for each neural mass model. This scheme ensures a high level of fidelity to the original continuous-time nonlinear dynamics, while also contributing to computational efficiency. One of the significant advantages of this approach is its ease of implementation. Unlike traditional methods, which often suffer from quadratic scalability, the algorithm proposed in this paper exhibits linear scalability with respect to the number of neural masses and the number of equations representing them. This breakthrough opens up new possibilities for large-scale simulations and computational studies of brain networks, propelling the field forward. To showcase the practical utility of the toolbox, I conducted a series of simulations. These included the simulation of a single Zetterberg-Jansen-Rit (ZJR) cortical column, a single thalamo-cortical unit, and a toy example featuring an interconnected network of 1000 ZJR columns. These simulations effectively demonstrated the impact of altering the CT, notably by introducing scattered delays. The examples highlighted the complexities of describing EEG oscillations, such as split alpha peaks arising solely from unique neuronal masses. By offering an open-source Matlab Live Script, I aim to empower researchers and facilitate this toolbox's widespread adoption and exploration, fostering collaboration and advancing our understanding of brain networks. This work presents a groundbreaking toolbox that revolutionizes the integration of high-dimensional Neural Mass Models by effectively incorporating the dynamics specified by nonlinear RDEs and the intricate connectivity patterns represented by the CT. 
By addressing the challenges posed by distributed delays and providing an efficient computational framework, this toolbox provides new avenues for investigating large-scale brain networks and enhances our understanding of their complex dynamics.
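As a toy illustration of the distributed-delay connectome tensor idea, the NumPy sketch below stores the CT as a sparse list of (target, source, delay, weight) entries and accumulates the coupling term as the sum over j and d of C[i, j, d] * x_j(t - d); the dynamics are a linear placeholder, not the toolbox's Local Linearization integration of nonlinear RDEs.

```python
# Toy sketch of distributed-delay coupling driven by a sparse 3D connectome
# tensor C[i, j, d]: strength of the connection from mass j to mass i arriving
# with a delay of d samples. This mirrors the idea described above only.
import numpy as np

n_masses, max_delay, n_steps = 4, 5, 200
rng = np.random.default_rng(0)

# Sparse CT stored as (target, source, delay, weight) entries (made-up values).
connections = [(0, 1, 2, 0.8), (1, 2, 4, 0.5), (2, 3, 1, 0.3), (3, 0, 3, 0.6)]

x = np.zeros((n_masses, n_steps))            # state history of each neural mass
x[:, :max_delay] = rng.standard_normal((n_masses, max_delay)) * 0.1

for t in range(max_delay, n_steps):
    coupling = np.zeros(n_masses)
    for i, j, d, w in connections:           # sum_{j,d} C[i,j,d] * x_j(t - d)
        coupling[i] += w * x[j, t - d]
    # Placeholder linear update; a real NMM would apply its nonlinear RDEs here.
    x[:, t] = 0.9 * x[:, t - 1] + coupling

print(x[:, -1])
```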
Article
Estimating physiological parameters, such as blood pressure (BP), from raw sensor data captured by noninvasive, wearable devices relies either on burdensome manual feature extraction, designed by domain experts to identify key waveform characteristics and phases, or on deep learning (DL) models that require extensive data collection. We propose the Data-Driven Guided Attention (DDGA) framework to optimize DL models so that they learn features supported by the underlying physiology and physics of the captured waveforms, with minimal expert annotation. With only a single template waveform cardiac cycle and its labelled fiducial points, we leverage dynamic time warping (DTW) to annotate all other training samples. DL models are trained to first identify these fiducial points before estimating BP, which informs them of the regions of the input that represent key phases of the cardiac cycle, while still granting DL the flexibility to determine the optimal feature set from those regions. In this study, we evaluate DDGA's improvements to a BP estimation task for three prominent DL-based architectures with two datasets: 1) the MIMIC-III waveform dataset, with ample training data, and 2) a bio-impedance (Bio-Z) dataset, with less than abundant training data. Experiments show that DDGA improves personalized BP estimation models by an average of 8.14% in root mean square error (RMSE) when there is an imbalanced distribution of target values in a training set, and improves model generalizability by an average of 4.92% in RMSE when estimating BP value ranges not previously seen in training.
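To illustrate the DTW-based annotation idea, the sketch below aligns a toy "new" cardiac cycle to a labelled template with a plain O(N*M) DTW and maps the template's fiducial indices through the warping path; the signals, fiducial labels, and DTW implementation are all assumptions, not the DDGA code.

```python
# Sketch of DTW-based label transfer: align a new cycle to a labelled template
# and carry the template's fiducial indices across the warping path.
import numpy as np

def dtw_path(a: np.ndarray, b: np.ndarray):
    """Return the optimal DTW warping path between 1-D signals a and b."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (n, m) to (1, 1).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

t = np.linspace(0, 1, 100)
template = np.sin(2 * np.pi * t)                 # toy "template cardiac cycle"
sample = np.sin(2 * np.pi * t ** 1.2)            # toy, slightly warped new cycle
template_fiducials = {"peak": 25, "trough": 75}  # hypothetical labelled indices

path = dtw_path(template, sample)
mapping = {}
for ti, si in path:                              # first matched sample index wins
    mapping.setdefault(ti, si)
transferred = {name: mapping[idx] for name, idx in template_fiducials.items()}
print(transferred)
```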