Example data discretization. (A) 200 example data points were randomly …
Source publication
Article
Full-text available
Understanding how neural systems integrate, encode, and compute information is central to understanding brain function. Frequently, data from neuroscience experiments are multivariate, the interactions between the variables are nonlinear, and the landscape of hypothesized or possible interactions between variables is extremely broad. Information theory...

Similar publications

Article
Full-text available
Background: Although there are numerous success records of Content and Language Integrated Learning (CLIL) implementation in educational settings and its principles have been found effective, the impact of the whole educational methodology may be overrated. Method: A comprehensive search was conducted to retrieve articles published between August 2005...
Article
Full-text available
Compression and swelling index parameters, obtained from the consolidation test, are used to calculate settlement for normally consolidated and over-consolidated soils, respectively. When conditions are not suitable for performing that test, various alternative methods are investigated to obtain those parameters without carrying out the consolidation test. In this stud...
Article
Full-text available
A statistical test is presented to address the null hypothesis that sea-level fluctuations in the open ocean are solely due to additive noise in the wind stress. The effect of high-frequency wind stress variations can be represented as a correlated additive and multiplicative noise (CAM) stochastic model of sea-level variations. The main novel aspe...
Preprint
Full-text available
Information processing in the brain is conducted by a concerted action of multiple neural populations. Insights into the organization and dynamics of such populations can best be gained from broadband intracranial recordings of the so-called extracellular field potential, reflecting neuronal spiking as well as mesoscopic activities, such as wave...
Conference Paper
Full-text available
Geographically well-distributed, high-temporal-resolution solar irradiance data is scarce, resulting in many studies using mean hourly irradiance profiles as an input. This research demonstrates that by taking readily available mean hourly meteorological observations of okta, wind speed, cloud height and pressure; 1-minute resolution irradiance time...

Citations

... These results together suggested that the beta oscillation may act as a prominent mediator for inter-regional communications among the ACC, AI, and amygdala during perception of others' pain. We further disentangled the directionality of the observed empathy-related low-frequency synchronization by computing transfer entropy (TE), which measures the directionality of information flow between two brain regions 48 . For each channel pair, we computed bidirectional TE (e.g., from AI to amygdala and from the amygdala to AI) across the stimulus-presentation time window. ...
... Transfer entropy analysis. Given that power correlation is an undirected measure that cannot convey directional information about inter-regional interactions 45,93 , we further estimated the directionality of the power correlation results by computing TE values using the neuroscience information theory toolbox 48 . The choice of TE to index effective connectivity was based on previous work 94 , which showed that TE, as compared to Granger causality and other model-based approaches, is a model-free measure of effective connectivity based on information theory and is better at detecting effective connectivity for nonlinear interactions. ...
... Since previous research has not examined the electrophysiological basis for effective connectivity within the human empathy network, and we had no clear assumptions about the interaction patterns, we took advantage of TE and its exploratory nature to reveal the directions of the observed inter-regional low-frequency coupling. Specifically, TE measured the mutual information exchange between Y at lag t₁ and X at t₀, given the information about Y at t₀ 48 . We computed bidirectional TE (i.e., TE from the AI to the amygdala, as well as from the amygdala to the AI) of the minimally smoothed power data (smoothed with a 50 ms time window) across the stimulus-presentation time window for each epoch, each frequency, and each channel pair at a lag of 10 ms. ...
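The excerpts above all estimate transfer entropy on discretized power data. For concreteness, below is a minimal plug-in sketch of TE(X → Y) = I(Y_{t+lag}; X_t | Y_t) in bits for two already-discretized series. The studies themselves use the cited toolbox 48 ; the Python function here, including its name transfer_entropy and the way the lag is expressed in samples, is an illustrative assumption, not that toolbox's API.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, lag=1):
    """Plug-in estimate of TE(X -> Y) = I(Y_{t+lag}; X_t | Y_t) in bits,
    for two already-discretized 1-D series of equal length."""
    x, y = np.asarray(x), np.asarray(y)
    trips = list(zip(y[lag:], y[:-lag], x[:-lag]))  # (y_future, y_past, x_past)
    n = len(trips)
    n_fpx = Counter(trips)
    n_fp = Counter((f, p) for f, p, _ in trips)     # (y_future, y_past)
    n_px = Counter((p, q) for _, p, q in trips)     # (y_past, x_past)
    n_p = Counter(p for _, p, _ in trips)           # y_past
    te = 0.0
    for (f, p, q), c in n_fpx.items():
        te += (c / n) * np.log2(c * n_p[p] / (n_fp[(f, p)] * n_px[(p, q)]))
    return te
```

A lag of 10 ms, as in the study quoted above, would correspond to lag = round(0.010 * fs) samples at sampling rate fs.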
Article
Full-text available
Empathy enables understanding and sharing of others’ feelings. Human neuroimaging studies have identified critical brain regions supporting empathy for pain, including the anterior insula (AI), anterior cingulate (ACC), amygdala, and inferior frontal gyrus (IFG). However, to date, the precise spatio-temporal profiles of empathic neural responses and inter-regional communications remain elusive. Here, using intracranial electroencephalography, we investigated electrophysiological signatures of vicarious pain perception. Others’ pain perception induced early increases in high-gamma activity in IFG, beta power increases in ACC, but decreased beta power in AI and amygdala. Vicarious pain perception also altered the beta-band-coordinated coupling between ACC, AI, and amygdala, as well as increased modulation of IFG high-gamma amplitudes by beta phases of amygdala/AI/ACC. We identified a necessary combination of neural features for decoding vicarious pain perception. These spatio-temporally specific regional activities and inter-regional interactions within the empathy network suggest a neurodynamic model of human pain empathy.
... What makes information theory a useful analytical tool in neuroscience is its model independence: it is applicable to any mixture of multivariate data, with linear and non-linear interactions [54][55][56] . However, its applicability to causal analysis must be treated with caution. ...
... Entropy, mutual information, and Kullback-Leibler (KL) divergence are fundamental concepts of information theory [1]. Originally introduced in the field of communication [2], information theory has now found uses in a diverse set of disciplines, including artificial intelligence [3], Earth and environmental science [4], experimental design [5], neuroscience [6], and finance and economics [7]. Its wide-ranging applications stem from its solid foundation in probability theory. ...
... I(X; Y) = D_KL(P(x, y) ‖ P(x)P(y)) = ∑_{x∈A_X} ∑_{y∈A_Y} P(x, y) log₂ [P(x, y) / (P(x)P(y))] (6). Mutual information is non-negative and symmetric, i.e., I(X; Y) = I(Y; X), and I(X; Y) = 0 only when Equation (5) is true. Using the rules of conditional probabilities, namely P(x, y) = P(y)P(x | y), one arrives at the more common definition of mutual information: ...
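Equation (6) can be evaluated directly from a joint contingency table of paired discrete samples. A minimal plug-in sketch (the function name mutual_information is assumed, not taken from any cited toolbox):

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of Equation (6): I(X; Y) in bits from paired
    discrete samples, via the joint contingency table."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xs.size, ys.size))
    np.add.at(joint, (xi, yi), 1.0)           # joint counts
    pxy = joint / joint.sum()                 # P(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # P(x)
    py = pxy.sum(axis=0, keepdims=True)       # P(y)
    nz = pxy > 0                              # convention: 0 * log 0 = 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))
```

The zero-probability mask implements the usual convention that terms with P(x, y) = 0 contribute nothing to the sum.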
Article
Full-text available
Using information-theoretic quantities in practical applications with continuous data is often hindered by the fact that probability density functions need to be estimated in higher dimensions, which can become unreliable or even computationally unfeasible. To make these useful quantities more accessible, alternative approaches such as binned frequencies using histograms and k-nearest neighbors (k-NN) have been proposed. However, a systematic comparison of the applicability of these methods has been lacking. We wish to fill this gap by comparing kernel-density-based estimation (KDE) with these two alternatives in carefully designed synthetic test cases. Specifically, we wish to estimate the information-theoretic quantities: entropy, Kullback–Leibler divergence, and mutual information, from sample data. As a reference, the results are compared to closed-form solutions or numerical integrals. We generate samples from distributions of various shapes in dimensions ranging from one to ten. We evaluate the estimators’ performance as a function of sample size, distribution characteristics, and chosen hyperparameters. We further compare the required computation time and specific implementation challenges. Notably, k-NN estimation tends to outperform other methods, considering algorithmic implementation, computational efficiency, and estimation accuracy, especially with sufficient data. This study provides valuable insights into the strengths and limitations of the different estimation methods for information-theoretic quantities. It also highlights the significance of considering the characteristics of the data, as well as the targeted information-theoretic quantity when selecting an appropriate estimation technique. These findings will assist scientists and practitioners in choosing the most suitable method, considering their specific application and available data. We have collected the compared estimation methods in a ready-to-use open-source Python 3 toolbox and, thereby, hope to promote the use of information-theoretic quantities by researchers and practitioners to evaluate the information in data and models in various disciplines.
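Since the abstract above singles out k-NN estimation as the strongest all-round performer for continuous data, a minimal sketch of one standard member of that family, the Kozachenko-Leonenko k-NN entropy estimator, may be useful; this is a generic textbook construction, not code from the paper's toolbox:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(samples, k=3):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats).
    `samples` is an (n, d) array of continuous observations; assumes no
    duplicate points, so every k-th-neighbor distance is positive."""
    samples = np.asarray(samples, dtype=float)
    n, d = samples.shape
    # distance to each point's k-th neighbor (column 0 is the point itself)
    eps = cKDTree(samples).query(samples, k=k + 1)[0][:, k]
    log_unit_ball = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)  # log V_d
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(eps))
```

As a sanity check, for a few thousand samples from a 1-D standard normal the estimate should approach 0.5 * log(2πe) ≈ 1.42 nats.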
... As a deliberately asymmetric measure, TE is frequently employed to deduce the directionality of information flow and, consequently, the causal relationship between the two processes. 28,45 TE has been proposed as an effective tool for investigating causality within complex systems, especially in computational neuroscience, 44,[46][47][48] and in physiological systems such as cardiorespiratory interactions. [49][50][51][52][53] In particular, TE has also been employed to analyse electroencephalogram signals in individuals with mood disorders. ...
Article
Full-text available
Background Sleep and circadian rhythm disruptions are common in patients with mood disorders. The intricate relationship between these disruptions and mood has been investigated, but their causal dynamics remain unknown. Methods We analysed data from 139 patients (76 female, mean age = 23.5 ± 3.64 years) with mood disorders who participated in a prospective observational study in South Korea. The patients wore wearable devices to monitor sleep and engaged in smartphone-delivered ecological momentary assessment of mood symptoms. Using a mathematical model, we estimated their daily circadian phase based on sleep data. Subsequently, we obtained daily time series for sleep/circadian phase estimates and mood symptoms spanning >40,000 days. We analysed the causal relationship between the time series using transfer entropy, a non-linear causal inference method. Findings The transfer entropy analysis suggested causality from circadian phase disturbance to mood symptoms in both patients with MDD (n = 45) and BD type I (n = 35), as 66.7% and 85.7% of the patients with a large dataset (>600 days) showed causality, but not in patients with BD type II (n = 59). Surprisingly, no causal relationship was suggested between sleep phase disturbances and mood symptoms. Interpretation Our findings suggest that in patients with mood disorders, circadian phase disturbances directly precede mood symptoms. This underscores the potential of targeting circadian rhythms in digital medicine, such as sleep or light exposure interventions, to restore circadian phase and thereby manage mood disorders effectively. Funding Institute for Basic Science, the Human Frontiers Science Program Organization, the National Research Foundation of Korea, and the Ministry of Health & Welfare of South Korea.
... To address this, we leveraged Partial Information Decomposition (PID) (33), which allows the information dynamics between brain regions to be decomposed into synergistic and redundant interactions (45). Here, redundancy implies that two brain regions encode the prediction error in the same way, indicating a common encoding mechanism (33,45,79). In contrast, synergy reflects the tendency of the two brain regions to complementarily encode the error signal, indicating a distributed encoding mechanism (33,44,45). ...
Preprint
Full-text available
Predictive coding theories propose that the brain constantly updates its internal models of the world to minimize prediction errors and optimize sensory processing. However, the neural mechanisms that link the encoding of prediction errors and the optimization of sensory representations remain unclear. Here, we provide direct evidence of how predictive learning shapes the representational geometry of the human brain. We recorded magnetoencephalography (MEG) in human participants listening to acoustic sequences with different levels of regularity. Representational similarity analysis revealed how, through learning, the brain aligned its representational geometry to match the statistical structure of the sensory inputs, by clustering the representations of temporally contiguous and predictable stimuli. Crucially, we found that in sensory areas the magnitude of the representational shift correlated with the encoding strength of prediction errors. Furthermore, using partial information decomposition, we found that prediction errors were processed by a synergistic network of high-level associative and sensory areas. Importantly, the strength of synergistic encoding of prediction errors predicted the magnitude of representational alignment during learning. Our findings provide evidence that large-scale neural interactions engaged in predictive processing modulate the representational content of sensory areas, which may enhance the efficiency of perceptual processing in response to the statistical regularities of the environment.
... However, this last variable merges the redundant and synergistic effects of predictors (38). Instead, partial information decomposition (PID) is a mathematical framework proposed by Williams and Beer (31) that is capable of decomposing the total information provided by a set of variables, called sources, about a target variable into clearly interpretable atoms of partial information (46,47). In the bivariate case (2 sources and 1 target), PID outputs four atoms of partial information: two atoms of information exclusively provided by each of the two sources ("unique"), one accounting for the information only available when the two sources are considered together and never accessible when looking at one source at a time ("synergistic"), and the last one for information shared between the two sources ("redundant"). ...
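The bivariate decomposition described above can be made concrete with the original I_min redundancy measure of Williams and Beer (31). A minimal sketch, assuming a discrete joint distribution p[s, x1, x2] over one target and two sources; note that later PID variants replace I_min with other redundancy functions:

```python
import numpy as np

def _mi(p_xy):
    """I(X; Y) in bits from a 2-D joint probability table."""
    px = p_xy.sum(1, keepdims=True)
    py = p_xy.sum(0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (px * py)[nz])))

def pid_williams_beer(p):
    """Bivariate PID atoms (Williams-Beer I_min) from joint p[s, x1, x2]."""
    p = p / p.sum()
    p_s = p.sum((1, 2))
    p_sx1, p_sx2 = p.sum(2), p.sum(1)

    def specific_info(p_sx):
        """Specific information a source gives about each target state s."""
        px = p_sx.sum(0)
        out = np.zeros(p_s.size)
        for s in range(p_s.size):
            for x in range(px.size):
                if p_sx[s, x] > 0:
                    out[s] += (p_sx[s, x] / p_s[s]) * np.log2(
                        (p_sx[s, x] / px[x]) / p_s[s])
        return out

    red = float(np.sum(p_s * np.minimum(specific_info(p_sx1),
                                        specific_info(p_sx2))))
    i1, i2 = _mi(p_sx1), _mi(p_sx2)
    i12 = _mi(p.reshape(p.shape[0], -1))      # joint source (x1, x2)
    return {"redundant": red, "unique_1": i1 - red, "unique_2": i2 - red,
            "synergistic": i12 - i1 - i2 + red}
```

On the binary XOR distribution (target = x1 XOR x2, sources uniform) this returns one full bit of synergy and zero for the other three atoms, matching the standard PID test case.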
Article
The human brain tracks available speech acoustics and extrapolates missing information such as the speaker's articulatory patterns. However, the extent to which articulatory reconstruction supports speech perception remains unclear. This study explores the relationship between articulatory reconstruction and task difficulty. Participants listened to sentences and performed a speech-rhyming task. Real kinematic data of the speaker's vocal tract were recorded via electromagnetic articulography (EMA) and aligned to corresponding acoustic outputs. We extracted articulatory synergies from the EMA data using Principal Component Analysis (PCA) and employed Partial Information Decomposition (PID) to separate the electroencephalographic (EEG) encoding of acoustic and articulatory features into unique, redundant, and synergistic atoms of information. We median-split sentences into easy (ES) and hard (HS) based on participants' performance and found that greater task difficulty involved greater encoding of unique articulatory information in the theta band. We conclude that fine-grained articulatory reconstruction plays a complementary role in the encoding of speech acoustics, lending further support to the claim that motor processes support speech perception.
... While important work has been done in this area (see Churchland, 1986), neuroscientists have yet to formally construct a theory that conceptualizes the work they conduct. Neuroscientists often employ theories to help them frame their scientific procedures (Bello-Morales & Delgado-García, 2015; Frank & Badre, 2015; Timme & Lapish, 2018), but neuroscience has been, and continues to be, a discipline of scientific methodology, where theory does not apply to the discipline itself. ...
Article
Full-text available
Thoughts, emotions, and behaviors are linked fundamentally to the environments one inhabits. The person-in-environment perspective effectively captures these three aspects of the human experience and serves as a central fixture within social work research and practice. Many social workers use this perspective to guide every facet of the work they undertake, from case conceptualization to ethics of human subject research. At the same time, recent advancements in human neuroscience research and neuroimaging technologies have inspired social workers to embrace how the nervous system is integrally interconnected with one’s environments. In turn, human neuroscience has catalyzed more biologically-informed practice and research in the field of social work, centered on elucidating social and psychological developmental domains within systems. The popularity of the person-in-environment perspective and the integration of human neuroscience in the field of social work has created a nexus that heretofore has not been adequately integrated into the literature. The present paper addresses this gap with a novel theory known as neurosocial interdependence, which integrates insights from human neuroscience into the framework of the person-in-environment perspective. This paper also bolsters the development of the theory of neurosocial interdependence by introducing a novel testing instrument and measurement scale, exploring how these tools might be used to implement the theory of neurosocial interdependence within social work research and clinical settings.
... Completely overlapping firing rate distributions (in response to deviants and standards) would yield an AUC = 0.5, while separate distributions would result in AUC = 1. On the other hand, Mutual Information is defined as the reduction in the uncertainty of one variable (in this case the neuronal response) due to the knowledge of another (in this case, the type of stimulus; Timme and Lapish, 2018). In other words, this measure reflects how much information a neural response (firing rate) contains about the type of stimulus that is played. ...
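Both quantities quoted above are simple to compute. The ROC AUC between the two firing-rate distributions follows from the Mann-Whitney identity, sketched below with an assumed function name; the mutual information between stimulus type and firing rate can then be obtained with a discrete plug-in estimator like the mutual_information sketch earlier in this section.

```python
import numpy as np

def roc_auc(deviant_rates, standard_rates):
    """AUC between two firing-rate distributions via the Mann-Whitney
    identity: P(deviant > standard) + 0.5 * P(tie). Completely
    overlapping distributions give 0.5; fully separated ones give 1.0."""
    x = np.asarray(deviant_rates)[:, None]
    y = np.asarray(standard_rates)[None, :]
    return float((x > y).mean() + 0.5 * (x == y).mean())
```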
Article
Full-text available
A fundamental property of sensory systems is their ability to detect novel stimuli in the ambient environment. The auditory brain contains neurons that decrease their response to repetitive sounds but increase their firing rate to novel or deviant stimuli; the difference between both responses is known as stimulus-specific adaptation or neuronal mismatch (nMM). Here, we tested the effect of microiontophoretic applications of ACh on the neuronal responses in the auditory cortex (AC) of anesthetized rats during an auditory oddball paradigm, including cascade controls. Results indicate that ACh modulates the nMM, affecting prediction error responses but not repetition suppression, and this effect is manifested predominantly in infragranular cortical layers. The differential effect of ACh on responses to standards, relative to deviants (in terms of averages and variances), was consistent with the representational sharpening that accompanies an increase in the precision of prediction errors. These findings suggest that ACh plays an important role in modulating prediction error signaling in the AC and gating the access of these signals to higher cognitive levels.
... A complementary perspective to study information processing in the brain is offered by the analysis of salient events, which generally occur in aperiodic bursts (44). In fact, from the early cybernetic interpretation of the brain as a "digital" machine (45) to the more recent applications of information theory tools (46), it was shown that a great deal of information can be extracted by binarizing the signals. In this work, we demonstrated this principle by analyzing an iEEG dataset during naturalistic acoustic stimuli preserving only a small percentage of data points corresponding to neuronal avalanches. ...
... In this context, the "digital" framework of the AP and TP could offer a time-resolved and computationally advantageous proxy of timescale dynamics. Besides computational efficiency, binarization allows the deployment of a series of conceptual tools such as information theory (46), the study of network dynamics (18), as well as paradigms such as brain criticality (53,54). In relation to the latter, we highlight that, in standard neuronal avalanche analyses aimed at testing the presence of a critical state in the brain, the bin size chosen for the data binarization plays an important role. ...
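To make the binarization step these excerpts rely on concrete, here is a minimal sketch that thresholds z-scored channels and extracts avalanche sizes as runs of consecutive non-empty time bins. The z = 3 threshold and the run-based avalanche definition are common conventions assumed for illustration, not parameters taken from the preprint below:

```python
import numpy as np

def binarize(signals, z_thresh=3.0):
    """Binarize continuous channels (rows) to 1 where the absolute
    z-score exceeds the threshold; assumes non-constant channels."""
    zs = (signals - signals.mean(1, keepdims=True)) / signals.std(1, keepdims=True)
    return (np.abs(zs) > z_thresh).astype(int)

def avalanche_sizes(binary, bin_size=1):
    """Avalanche sizes: total events in each run of consecutive time
    bins with at least one active channel, bounded by empty bins.
    `bin_size` re-bins the event counts, which (as noted above) can
    change the resulting size distribution."""
    counts = binary.sum(0)
    if bin_size > 1:
        n = counts.size // bin_size * bin_size
        counts = counts[:n].reshape(-1, bin_size).sum(1)
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return sizes
```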
Preprint
Full-text available
Neuronal avalanches are cascade-like events ubiquitously observed across imaging modalities and scales. Aperiodic timing and topographic distribution of these events have been related to the systemic physiology of brain states. However, it is still unknown whether neuronal avalanches are correlates of cognition, or purely reflect physiological properties. In this work, we investigate this question by analyzing intracranial recordings of epileptic participants during rest and passive listening of naturalistic speech and music stimuli. During speech or music listening, but not rest, participants’ brains “tick” together, as the timing of neuronal avalanches is stimulus-driven and hence correlated across participants. Auditory regions are strongly participating in coordinated neuronal avalanches, but also associative regions, indicating both the specificity and distributivity of cognitive processing. The subnetworks where such processing takes place during speech and music largely overlap, especially in auditory regions, but also diverge in associative cortical sites. Finally, differential pathways of avalanche propagation across auditory and non-auditory regions differentiate brain network dynamics during speech, music and rest. Overall, these results highlight the potential of neuronal avalanches as a neural index of cognition. Author’s summary Neuronal avalanches consist of collective network events propagating across the brain in short-lived and aperiodic instances. These salient events have garnered a great interest for studying the physics of cortical dynamics, and bear potential for studying brain data also in purely neuroscientific contexts. In this work we investigated neuronal avalanches to index cognition, analyzing an intracranial stereo electroencephalography (iEEG) dataset during speech, music listening and resting state in epileptic patients. We show that neuronal avalanches are consistently driven by music and speech stimuli: avalanches co-occur in participants listening to the same auditory stimulus; avalanche topography differs from resting state, presenting partial similarities during speech and music; avalanche propagation changes during speech, music, and rest conditions, especially along the pathways between auditory and non auditory regions. Our work underlines the distributed nature of auditory stimulus processing, supporting neuronal avalanches as a valuable and computationally advantageous framework for the study of cognition in humans.
... For the SNR calculations, the positive amplitudes of the spikes were used. Using the information theory toolbox developed by Timme and Lapish [28], information-theoretic metrics such as Shannon entropy and mutual information were computed, with spike counts serving as the encoding variable. To perform these calculations, the spike counts were partitioned into 32 uniform-width bins, i.e., 32 states. ...
... While the SNR provides a valuable measure of signal quality, it does not indicate the amount of information that can be inferred from the signal. To address this, we employed Shannon Entropy, a widely used information theory measure, to determine the information content of the signal or dataset [28]. For each fascicle in each rat trial, we calculated the Shannon Entropy and determined the mean and standard deviation of the information content across all rats and fascicles. ...
... The mutual information between two signals determines the amount of information that can be inferred about one signal from the other, i.e., the amount of dependent information common to the two. The mutual information between pairs of fascicles for each trial was calculated using the information theory toolbox developed by Timme and Lapish [28]. The mean mutual information for each pair of fascicles across all rats was then calculated and tested to determine whether the information recorded by the electrodes was independent between the fascicles. ...
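The entropy computation described above (spike counts partitioned into 32 uniform-width bins) reduces to a histogram plug-in estimate. A minimal sketch, with an assumed function name and independent of the cited toolbox's implementation:

```python
import numpy as np

def binned_entropy(spike_counts, n_bins=32):
    """Shannon entropy (bits) of spike counts discretized into
    n_bins uniform-width bins (i.e., n_bins states)."""
    counts, _ = np.histogram(spike_counts, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))
```

The pairwise fascicle mutual information can reuse the mutual_information sketch earlier in this section after discretizing both channels with the same binning (e.g., via np.digitize).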
Article
Full-text available
Objective. The primary challenge faced in the field of neural rehabilitation engineering is the limited advancement in nerve interface technology, which currently fails to match the mechanical properties of small-diameter nerve fascicles. Novel developments are necessary to enable long-term, chronic recording from a multitude of small fascicles, allowing for the recovery of motor intent and sensory signals. Approach. In this study, we analyze the chronic recording capabilities of carbon nanotube yarn (CNTY) electrodes in the peripheral somatic nervous system. The electrodes were surgically implanted in the sciatic nerve's three individual fascicles in rats, enabling the recording of neural activity during gait. Signal-to-noise ratio (SNR) and information theory were employed to analyze the data, demonstrating the superior recording capabilities of the electrodes. Flat interface nerve electrode (FINE) and thin-film longitudinal intrafascicular electrode (tf-LIFE) electrodes were used as references to assess the results from the SNR and information theory analyses. Main results. The electrodes exhibited the ability to record chronic signals with SNRs reaching as high as 15 dB, providing 12 bits of information for the sciatic nerve, a significant improvement over previous methods. Furthermore, the study revealed that the SNR and information content of the neural signals remained consistent over a period of 12 weeks across three different fascicles, indicating the stability of the interface. The signals recorded from these electrodes were also analyzed for selectivity using information theory metrics, which showed an information sharing of approximately 1.4 bits across the fascicles. Significance. The ability to safely and reliably record from multiple fascicles of different nerves simultaneously over extended periods of time holds substantial implications for the field of neural and rehabilitation engineering. This advancement addresses the limitations of current nerve interface technologies and opens up new possibilities for enhancing neural rehabilitation and control.