FIGURE 1 | Motifs of three coupled Boltzmann neurons. Each motif is characterized by a 3 × 3 weight matrix W (A), defining the connection strength between the neurons. There are 2^3 = 8 possible states Y = 0…7 for each motif. The transition probabilities between these states are summarized in an 8 × 8 state transition matrix (B).

Source publication
Article
Full-text available
Recurrent neural networks can produce ongoing state-to-state transitions without any driving inputs, and the dynamical properties of these transitions are determined by the neuronal connection strengths. Due to non-linearity, it is not clear how strongly the system dynamics is affected by discrete local changes in the connection structure, such as...

Contexts in source publication

Context 1
... investigate the set of all possible network motifs that can be built from 3 Boltzmann neurons with ternary connections w_ij ∈ {−1, 0, +1}, where self-connections w_ii are permitted (Figure 1A). In principle there are 3^9 = 19,683 possible ternary 3 × 3 weight matrices. ...
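As a quick check of this combinatorial count, the full set of ternary weight matrices can be enumerated directly. The following Python sketch only illustrates the counting argument and is not taken from the source publication; the variable names are placeholders.

```python
import itertools
import numpy as np

# Enumerate all ternary 3x3 weight matrices with entries w_ij in {-1, 0, +1}.
# Self-connections w_ii are allowed, so all 9 entries vary freely,
# giving 3**9 = 19,683 possible motifs.
values = (-1, 0, 1)
motifs = [np.array(entries).reshape(3, 3)
          for entries in itertools.product(values, repeat=9)]

print(len(motifs))  # 19683
```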
Context 2
... every neuron can be in one of two binary states, a 3-node motif can be in 2^3 = 8 possible motif states. Given the momentary motif state and the weight matrix, the probabilities for all eight successive motif states can be computed, thus defining the 8 × 8 state transition matrix of a Markov process (Figure 1B). All information theoretical properties of 3-neuron motifs, such as entropy or mutual information of successive states, are determined by the state transition matrix. ...
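For readers who want to reproduce such a transition matrix, the sketch below shows one possible computation for a single motif. It assumes synchronous updates, zero biases, and the standard Boltzmann firing rule P(y_i = 1 | y) = σ(Σ_j w_ij y_j); these conventions are assumptions made for illustration and may differ in detail from the source publication.

```python
import numpy as np
from itertools import product

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def transition_matrix(W):
    """8x8 state transition matrix of a 3-neuron Boltzmann motif.

    Assumes synchronous updates, zero biases, and the convention
    P(y_i = 1 | y) = sigmoid(sum_j W[i, j] * y[j]).
    """
    states = [np.array(s) for s in product((0, 1), repeat=3)]
    T = np.zeros((8, 8))
    for a, y in enumerate(states):
        p_on = sigmoid(W @ y)                 # firing probability of each neuron
        for b, y_next in enumerate(states):
            # neurons update independently given the current motif state
            T[a, b] = np.prod(np.where(y_next == 1, p_on, 1.0 - p_on))
    return T

W = np.array([[0, 1, -1], [1, 0, 1], [-1, 1, 0]])  # example ternary motif
T = transition_matrix(W)
assert np.allclose(T.sum(axis=1), 1.0)             # each row is a probability distribution
```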

Citations

... Primarily, this balance is fundamental to the principle of homeostasis, which prevents the brain from overflowing with spikes and keeps the average activity in a certain range (Sprekeler, 2017). It has been shown that strong excitation can provoke irregular activity patterns (Van Vreeswijk and Sompolinsky, 1996, 2005; Krauss et al., 2019b; Sanzeni et al., 2022; Calvet et al., 2023), and that an imbalance of excitation and inhibition could be linked to pathologies such as epilepsy (Nelson and Valakh, 2015) and autism (Arviv et al., 2016). In our present context, studies on models (Ehsani and Jost, 2022), in vitro (Sandvig and Fiskum, 2020) and in vivo (Yang et al., 2012) showed that the meticulous balancing of excitatory and inhibitory neurons was also linked to the edge of chaos (Poil et al., 2012). ...
Article
Full-text available
Reservoir Computing (RC) is a paradigm in artificial intelligence where a recurrent neural network (RNN) is used to process temporal data, leveraging the inherent dynamical properties of the reservoir to perform complex computations. In the realm of RC, the excitatory-inhibitory balance b has been shown to be pivotal for driving the dynamics and performance of Echo State Networks (ESNs) and, more recently, Random Boolean Networks (RBNs). However, the relationship between b and other parameters of the network is still poorly understood. This article explores how the interplay of the balance b, the connectivity degree K (i.e., the number of synapses per neuron) and the size of the network (i.e., the number of neurons N) influences the dynamics and performance (memory and prediction) of an RBN reservoir. Our findings reveal that K and b are strongly tied in optimal reservoirs. Reservoirs with high K have two optimal balances, one for globally inhibitory networks (b < 0), and the other one for excitatory networks (b > 0). Both show asymmetric performances about a zero balance. In contrast, for moderate K, the optimal value being K = 4, the best reservoirs are obtained when excitation and inhibition almost, but not exactly, balance each other. For almost all K, the influence of the size is such that increasing N leads to better performance, even with very large values of N. Our investigation provides clear directions to generate optimal reservoirs or reservoirs with constraints on size or connectivity.
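The abstract does not specify how the balance b parameterizes the connectivity, so the following sketch is only a hypothetical construction: it assumes b is the expected excess of excitatory over inhibitory synapses, with fixed in-degree K and ±1 weights. The function name and conventions are illustrative, not those of the cited article.

```python
import numpy as np

def random_boolean_reservoir(N, K, b, seed=0):
    """Random +/-1 connectivity with fixed in-degree K per neuron.

    Assumes (illustratively) that the balance b in [-1, 1] is the expected
    excess of excitatory over inhibitory synapses, i.e. P(w = +1) = (1 + b) / 2.
    The cited article may use a different parameterization.
    """
    rng = np.random.default_rng(seed)
    W = np.zeros((N, N))
    p_exc = (1.0 + b) / 2.0
    for i in range(N):
        inputs = rng.choice(N, size=K, replace=False)   # K presynaptic partners
        W[i, inputs] = rng.choice([1, -1], size=K, p=[p_exc, 1.0 - p_exc])
    return W

W = random_boolean_reservoir(N=100, K=4, b=0.1)
print(np.mean(W[W != 0]))  # empirical balance, close to b
```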
... Neuronal motifs are defined by specific arrangements of neurons, synapses, and their interactions [20][21][22]. These motifs often recur across various brain regions and species, highlighting their potential significance in shaping neural circuitry. ...
Article
Full-text available
Transmission of weak signals in neural networks is crucial for understanding the functionality of the brain. In this work, stochastic resonance (SR) in three-neuron FitzHugh–Nagumo (FHN) motifs and their small-world network with higher-order motif interactions is studied. Simulation results show that a single motif induces SR and responds better to a high-frequency weak signal. Stronger coupling strength within the motif increases the firing rate of the output neurons, resulting in a more pronounced resonance. Considering only the connections within the motif, a higher in-degree of the output neuron or a shorter minimum path length between input and output neurons will lead to a better response to weak signals. SR phenomena can also be observed in small-world networks composed of these motifs. Increasing either the motif coupling or the node coupling strength enhances the firing rate of output neurons, amplifying the response. There is a very strong correlation between the firing rate of the output neurons and the response. Our results may provide insights into the propagation of weak signals in higher-order networks and the selection of appropriate network topology.
... In previous studies, we systematically analyzed the structural and dynamical properties of very small RNNs, that is, three-neuron motifs (Krauss, Zankl et al., 2019), as well as large RNNs (Krauss, Schuster et al., 2019). Furthermore, we investigated resonance phenomena in RNNs. ...
... By color-coding each projected data point of a data set according to its label, the representation of the data can be visualized as a set of point clusters. For instance, MDS has already been applied to visualize word class distributions of different linguistic corpora (Schilling, Tomasello et al., 2021), hidden layer representations (embeddings) of artificial neural networks (Schilling, Maier et al., 2021), structure and dynamics of recurrent neural networks (Krauss, Prebeck et al., 2019; Krauss, Schuster et al., 2019; Krauss, Zankl et al., 2019), or brain activity patterns assessed during, for example, pure tone or speech perception (Krauss, Metzner et al., 2018; Schilling, Tomasello et al., 2021), or even during sleep (Krauss, Schilling et al., 2018; Metzner et al., 2023; Traxdorf et al., 2019). In all these cases, the apparent compactness and mutual overlap of the point clusters permit a qualitative assessment of how well the different classes separate. ...
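For orientation, a minimal example of such a color-coded MDS projection using scikit-learn is sketched below; the data, labels, and plotting choices are placeholders and not those of the cited studies.

```python
import numpy as np
from sklearn.manifold import MDS
import matplotlib.pyplot as plt

# Placeholder data: high-dimensional points with integer class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
labels = rng.integers(0, 3, size=300)

# Project to 2D with metric MDS and color-code each point by its label,
# so cluster compactness and mutual overlap can be judged visually.
embedding = MDS(n_components=2, random_state=0).fit_transform(X)
plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="viridis", s=10)
plt.show()
```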
Article
Free-running recurrent neural networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified with the mutual information I[x(t), x(t+1)] between subsequent system states x(t). Although previous studies have shown that I depends on the statistics of the network’s connection weights, it is unclear how to maximize I systematically and how to quantify the flux in large systems where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems. We find that in networks with moderately strong connections, the mutual information I is approximately a monotonic transformation of the root-mean-square averaged Pearson correlations between neuron pairs, a quantity that can be efficiently computed even in large systems. Furthermore, evolutionary maximization of I[x(t), x(t+1)] reveals a general design principle for the weight matrices enabling the systematic construction of systems with a high spontaneous information flux. Finally, we simultaneously maximize information flux and the mean period length of cyclic attractors in the state-space of these dynamical networks. Our results are potentially useful for the construction of RNNs that serve as short-time memories or pattern generators.
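As an illustration of the proxy quantity mentioned in the abstract, the sketch below computes the root-mean-square of pairwise Pearson correlations from a recorded state trajectory; the exact averaging convention (off-diagonal pairs only) is an assumption, not a detail confirmed by the article.

```python
import numpy as np

def rms_pairwise_correlation(states):
    """Root-mean-square of the pairwise Pearson correlations between neurons.

    `states` is a (T, N) array of recorded network states. According to the
    cited article, this quantity tracks the information flux I[x(t), x(t+1)]
    in networks with moderately strong connections; the averaging convention
    used here is an assumption.
    """
    C = np.corrcoef(states, rowvar=False)            # N x N correlation matrix
    off_diag = C[~np.eye(C.shape[0], dtype=bool)]    # exclude self-correlations
    return np.sqrt(np.mean(off_diag ** 2))

# Example with random binary states (T = 1000 time steps, N = 50 neurons)
states = np.random.default_rng(1).integers(0, 2, size=(1000, 50)).astype(float)
print(rms_pairwise_correlation(states))
```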
... By color-coding each projected data point of a data set according to its label, the representation of the data can be visualized as a set of point clusters. For instance, MDS has already been applied to visualize word class distributions of different linguistic corpora [23], hidden layer representations (embeddings) of artificial neural networks [49], [50], structure and dynamics of highly recurrent neural networks [51]–[54], or brain activity patterns assessed during, e.g., pure tone or speech perception [23], [55], or even during sleep [47], [48], [56], [57]. ...
... By color-coding each projected data point of a data set according to its label, the representation of the data can be visualized as a set of point clusters. For instance, MDS has already been applied to visualize word class distributions of different linguistic corpora [29], hidden layer representations (embeddings) of artificial neural networks [30], [31], structure and dynamics of highly recurrent neural networks [32], [33], [34], [35], or brain activity patterns assessed during, e.g., pure tone or speech perception [36], [29], or even during sleep [37], [38], [39], [40]. ...
... As of now, most studies in the field of RC rely on phase diagrams to exhibit a statistical relationship between connectivity, dynamics, and performance (Bertschinger and Natschläger, 2004; Büsing et al., 2010; Snyder et al., 2012; Krauss et al., 2019a; Metzner and Krauss, 2022). These results have been obtained by considering a limited number of reservoirs [from one (Metzner and Krauss, 2022), to 10 (Bertschinger and Natschläger, 2004), up to 100 (Krauss et al., 2019b)], and with a limited resolution in terms of the control parameter, due to the computational cost of these phase diagrams. ...
... Since the reservoirs are randomly generated, there might be huge differences between them even though the statistics of their connectivity are the same. Indeed, close to the critical point, reservoir steady-state activities exhibit a wide range of dynamics as discussed by statistical studies (Kinouchi and Copelli, 2006; Del Papa et al., 2017; Krauss et al., 2019b), and attractor classification (Seifter and Reggia, 2015; Bianchi et al., 2016; Krauss et al., 2019a,b; Metzner and Krauss, 2022). ...
... By substituting Eq. (7) in Eq. (6), we find b = Erf[1/(√2 σ⋆)], with Erf the error function. Thus, by controlling the weight distribution, our control parameter σ⋆ drives the excitatory-to-inhibitory balance and thus the reservoir dynamics, in line with Krauss et al. (2019b) and Metzner and Krauss (2022). Figure 2 shows the relationship between b and σ⋆. ...
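The quoted relation between the balance and the weight-distribution parameter can be evaluated directly; a minimal sketch (the function name is illustrative):

```python
from math import erf, sqrt

def balance_from_sigma(sigma_star):
    """Excitatory-inhibitory balance b = erf(1 / (sqrt(2) * sigma_star)),
    as quoted from the citing article."""
    return erf(1.0 / (sqrt(2.0) * sigma_star))

for s in (0.5, 1.0, 2.0):
    print(s, balance_from_sigma(s))
```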
Article
Full-text available
Reservoir computing provides a time and cost-efficient alternative to traditional learning methods. Critical regimes, known as the “edge of chaos,” have been found to optimize computational performance in binary neural networks. However, little attention has been devoted to studying reservoir-to-reservoir variability when investigating the link between connectivity, dynamics, and performance. As physical reservoir computers become more prevalent, developing a systematic approach to network design is crucial. In this article, we examine Random Boolean Networks (RBNs) and demonstrate that specific distribution parameters can lead to diverse dynamics near critical points. We identify distinct dynamical attractors and quantify their statistics, revealing that most reservoirs possess a dominant attractor. We then evaluate performance in two challenging tasks, memorization and prediction, and find that a positive excitatory balance produces a critical point with higher memory performance. In comparison, a negative inhibitory balance delivers another critical point with better prediction performance. Interestingly, we show that the intrinsic attractor dynamics have little influence on performance in either case.
... By color-coding each projected data point of a data set according to its label, the representation of the data can be visualized as a set of point clusters. For instance, MDS has already been applied to visualize word class distributions of different linguistic corpora [29], hidden layer representations (embeddings) of artificial neural networks [30], [31], structure and dynamics of highly recurrent neural networks [32], [33], [34], [35], or brain activity patterns assessed during, e.g., pure tone or speech perception [36], [29], or even during sleep [37], [38], [39], [40]. ...
Preprint
Full-text available
The human brain possesses the extraordinary capability to contextualize the information it receives from our environment. The entorhinal-hippocampal complex plays a critical role in this function, as it is deeply engaged in memory processing and constructing cognitive maps using place and grid cells. Comprehending and leveraging this ability could significantly augment the field of artificial intelligence. The multi-scale successor representation serves as a good model for the functionality of place and grid cells and has already shown promise in this role. Here, we introduce a model that employs successor representations and neural networks, along with word embedding vectors, to construct a cognitive map of three separate concepts. The network adeptly learns two differently scaled maps and situates new information in proximity to related pre-existing representations. The dispersion of information across the cognitive map varies according to its scale: it is either heavily concentrated, resulting in the formation of the three concepts, or spread evenly throughout the map. We suggest that our model could potentially improve current AI models by providing multi-modal context information to any input, based on a similarity metric for the input and pre-existing knowledge representations.
... The self-oscillatory neuron ensembles acting as a subset of the reservoir can couple with other non-oscillatory neuron ensembles and cause them to oscillate. Krauss, Zankl, Schilling, Schulze, and Metzner conducted a review of analyses on network motifs, specifically focusing on a three-neuron structure identical to the one shown in Fig. 4B [22]. ...
Preprint
Full-text available
This paper presents a method for reproducing a simple central pattern generator (CPG) using a modified Echo State Network (ESN). Conventionally, the dynamical reservoir needs to be damped to stabilize and preserve memory. However, we find that a reservoir that develops oscillatory activity without any external excitation can mimic the behaviour of a simple CPG in biological systems. We define the specific neuron ensemble required for generating oscillations in the reservoir and demonstrate how adjustments to the leaking rate, spectral radius, topology, and population size can increase the probability of reproducing these oscillations. The results of the experiments, conducted on time series simulation tasks, demonstrate that the ESN is able to generate the desired waveform without any input. This approach offers a promising solution for the development of bio-inspired controllers for robotic systems.
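For context, a generic leaky echo state network update, run without any input, is sketched below to illustrate the roles of the leaking rate and spectral radius mentioned in the abstract. This follows the standard leaky-integrator ESN equations and is not the modified ESN of the preprint; all parameter values are placeholders.

```python
import numpy as np

def run_free_esn(N=200, spectral_radius=1.1, leak_rate=0.3, steps=500, seed=0):
    """Free-running leaky echo state network (no external input).

    Standard leaky-integrator update
        x(t+1) = (1 - a) * x(t) + a * tanh(W x(t)),
    with the recurrent matrix W rescaled to a given spectral radius.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(N, N)) / np.sqrt(N)
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    x = rng.normal(size=N) * 0.1
    trajectory = []
    for _ in range(steps):
        x = (1.0 - leak_rate) * x + leak_rate * np.tanh(W @ x)
        trajectory.append(x.copy())
    return np.array(trajectory)

states = run_free_esn()
print(states.shape)  # (500, 200)
```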
... By color-coding each projected data point of a data set according to its label, the representation of the data can be visualized as a set of point clusters. For instance, MDS has already been applied to visualize word class distributions of different linguistic corpora [32], hidden layer representations (embeddings) of artificial neural networks [33, 34], structure and dynamics of recurrent neural networks [35–38], or brain activity patterns assessed during, e.g., pure tone or speech perception [32, 39], or even during sleep [40, 41]. ...
Article
Full-text available
How do we make sense of the input from our sensory organs, and put the perceived information into the context of our past experiences? The hippocampal-entorhinal complex plays a major role in the organization of memory and thought. The formation of and navigation in cognitive maps of arbitrary mental spaces via place and grid cells can serve as a representation of memories and experiences and their relations to each other. The multi-scale successor representation is proposed to be the mathematical principle underlying place and grid cell computations. Here, we present a neural network which learns a cognitive map of a semantic space based on 32 different animal species encoded as feature vectors. The neural network successfully learns the similarities between different animal species and constructs a cognitive map of ‘animal space’ based on the principle of successor representations, with an accuracy of around 30%, which is close to the theoretical maximum given that every animal species has more than one possible successor, i.e. nearest neighbor in feature space. Furthermore, a hierarchical structure, i.e. different scales of cognitive maps, can be modeled based on multi-scale successor representations. We find that, in fine-grained cognitive maps, the animal vectors are evenly distributed in feature space. In contrast, in coarse-grained maps, animal vectors are highly clustered according to their biological class, i.e. amphibians, mammals and insects. This could be a putative mechanism enabling the emergence of new, abstract semantic concepts. Finally, even completely new or incomplete input can be represented by interpolation of the representations from the cognitive map with a remarkably high accuracy of up to 95%. We conclude that the successor representation can serve as a weighted pointer to past memories and experiences, and may therefore be a crucial building block for including prior knowledge and deriving context knowledge from novel input. Thus, our model provides a new tool to complement contemporary deep learning approaches on the road towards artificial general intelligence.
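As background on the method named here, the successor representation of a Markov transition matrix T with discount factor γ has the standard closed form M = (I − γT)⁻¹, with different γ values corresponding to different scales of the cognitive map. A minimal sketch with toy data and illustrative names, not the network model of the cited article:

```python
import numpy as np

def successor_representation(T, gamma):
    """Closed-form successor representation M = (I - gamma * T)^-1
    for a state transition matrix T and discount factor gamma.
    Different gammas correspond to different spatial/semantic scales."""
    n = T.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * T)

# Toy example: uniform random-walk transitions between 5 states
T = np.full((5, 5), 0.2)
coarse = successor_representation(T, gamma=0.9)   # coarse-grained map
fine = successor_representation(T, gamma=0.3)     # fine-grained map
```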
... By color-coding each projected data point of a data set according to its label, the representation of the data can be visualized as a set of point clusters. For instance, MDS has already been applied to visualize word class distributions of different linguistic corpora [37], hidden layer representations (embeddings) of artificial neural networks [38, 39], structure and dynamics of highly recurrent neural networks [40–43], or brain activity patterns assessed during, e.g., pure tone or speech perception [37, 44], or even during sleep [45–48]. ...
Preprint
Full-text available
How do humans learn language, and can the first language be learned at all? These fundamental questions are still hotly debated. In contemporary linguistics, there are two major schools of thought that give completely opposite answers. According to Chomsky's theory of universal grammar, language cannot be learned because children are not exposed to sufficient data in their linguistic environment. In contrast, usage-based models of language assume a profound relationship between language structure and language use. In particular, contextual mental processing and mental representations are assumed to have the cognitive capacity to capture the complexity of actual language use at all levels. The prime example is syntax, i.e., the rules by which words are assembled into larger units such as sentences. Typically, syntactic rules are expressed as sequences of word classes. However, it remains unclear whether word classes are innate, as implied by universal grammar, or whether they emerge during language acquisition, as suggested by usage-based approaches. Here, we address this issue from a machine learning and natural language processing perspective. In particular, we trained an artificial deep neural network on predicting the next word, provided sequences of consecutive words as input. Subsequently, we analyzed the emerging activation patterns in the hidden layers of the neural network. Strikingly, we find that the internal representations of nine-word input sequences cluster according to the word class of the tenth word to be predicted as output, even though the neural network did not receive any explicit information about syntactic rules or word classes during training. This surprising result suggests, that also in the human brain, abstract representational categories such as word classes may naturally emerge as a consequence of predictive coding and processing during language acquisition.