Figure 5 - uploaded by Harold de Vladar
Convergence of the actual best fitness of the metapopulation with increasing problem size N on a general building-block fitness landscape. Each curve is an average of 10 independent iterations. N_D = 10 × 10 demes, N_A = 10 networks per deme, N neurons per network; patterns of length N are partitioned into blocks of size P = 10 (B = N/P blocks per pattern); p_rec = 0.1, μ_R = 1/N, p_migr = 0.004, N_T = 5. Note the logarithmic x axis. Inset: single simulation at N = 80, B = 8 (other parameters are the same). Plateaus roughly correspond to successive blocks being optimized by finding the best subsequence on the building-block fitness landscape.

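The caption's building-block structure (patterns of length N split into B blocks of size P) can be sketched in a few lines. This is a minimal illustration only: the caption does not specify the block scoring rule, so the assumption here that each block contributes 1 to fitness exactly when it matches a target subsequence, the binary alphabet, and the function name are all hypothetical.

```python
import random

def building_block_fitness(pattern, target, P=10):
    """Count blocks of size P in `pattern` that match `target` exactly.

    Hedged sketch: each matching block contributes 1; the paper's
    'general building block function' may weight blocks differently.
    """
    assert len(pattern) == len(target) and len(pattern) % P == 0
    B = len(pattern) // P  # number of blocks, B = N / P
    return sum(
        1 for b in range(B)
        if pattern[b * P:(b + 1) * P] == target[b * P:(b + 1) * P]
    )

# Example with N = 80, B = 8 as in the inset, over random binary patterns.
N, P = 80, 10
target = [random.randint(0, 1) for _ in range(N)]
pattern = target[:]   # start from the optimum...
pattern[0] ^= 1       # ...then corrupt one bit, breaking block 0
print(building_block_fitness(pattern, target, P))  # prints 7
```

A single wrong bit spoils only its own block, which is why fitness climbs in plateaus as whole blocks are discovered one by one.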

Source publication
Article
Full-text available
Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture with a proof-of-principle model of evolutionary search in the brain that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the...

Context in source publication

Context 1
... have investigated the performance of search in a metapopulation with different problem sizes (pattern lengths; see Figure 5). Results indicate that despite the vastness of the search space, the metapopulation is always able to converge to the global optimum, given enough time. ...
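The deme-structured search this passage refers to can be sketched generically. The parameter names below (N_D, N_A, μ_R = 1/N, p_migr) follow the figure caption, but the update rule is a plain within-deme hill climber with rare migration, not the paper's attractor-network implementation; the function name and the stand-in fitness are hypothetical.

```python
import random

def evolve_metapopulation(fitness, N=40, N_D=9, N_A=10,
                          mu=None, p_migr=0.004, steps=2000):
    """Hedged sketch of deme-structured evolutionary search.

    N_D demes of N_A binary patterns each; in every deme the best
    pattern seeds a mutated child that replaces the deme's worst,
    and with probability p_migr it also migrates to a random deme.
    """
    mu = mu if mu is not None else 1.0 / N  # per-bit mutation rate mu_R
    demes = [[[random.randint(0, 1) for _ in range(N)]
              for _ in range(N_A)] for _ in range(N_D)]
    for _ in range(steps):
        for deme in demes:
            parent = max(deme, key=fitness)            # select the deme's best
            child = [bit ^ (random.random() < mu) for bit in parent]
            worst = min(range(N_A), key=lambda i: fitness(deme[i]))
            deme[worst] = child                        # replace the deme's worst
            if random.random() < p_migr:               # rare migration event
                demes[random.randrange(N_D)][random.randrange(N_A)] = parent[:]
    return max((ind for deme in demes for ind in deme), key=fitness)

# Stand-in fitness: number of 1-bits (unimodal, unlike a block landscape).
best = evolve_metapopulation(sum, N=20, steps=2000)
```

Even this crude version reliably reaches the optimum on a small unimodal landscape, illustrating the claim that the metapopulation converges given enough time.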

Similar publications

Preprint
Full-text available
Our purpose is to address the biological problem of finding foundations of the organization in the collective activity among cell networks in the nervous system, at the meso/macroscale, giving rise to cognition and consciousness. But in doing so, we encounter another problem related to the interpretation of methods to assess the neural interactions...

Citations

... In animal brains, such representational units might be neural ensembles, scaling up from a low number of loosely functionally connected ensembles to a large number with a high-accuracy information copying ability between them [25][26][27][28][29] . In artificial agents, a small number of appropriate architectural and hyperparameter choices might lead to informational replication and Darwinian evolution over representations [30][31][32][33] . ...
... There are at least three distinctive contexts in which the mechanism proposed in this paper potentially applies: (i) Darwinian neurodynamics, the hypothesis that Darwinian dynamics unfolds in human or animal brains [25][26][27][28][29] , (ii) emergence of Darwinian dynamics within artificial agents [30][31][32][33] , and (iii) Downwards (or filial) transitions of individuality, from Darwinian groups of non-Darwinian individuals to Darwinian groups of Darwinian individuals 26,39,40 . Figure 1A illustrates the mechanism and the three proposed contexts. ...
... This can be seen as an adaptive evolutionary route to the algorithmic emergence of what is called Darwinian neurodynamics, the hypothesis that Darwinian dynamics over neural representations unfold in human or animal brains. This proposed mechanism complements previous work on (i) models of neural information copying 27,29,[44][45][46] , (ii) experimental findings suggesting that in vitro information transfer between neural ensembles can emerge without explicit training 47,48 , (iii) proposed cognitive markers of a Darwinian search in hypothesis space 28 . ...
Preprint
Full-text available
The emergence of self-replication in chemical space has led to an explosive diversification of form and function. It is hypothesized that a similar process underlies human action selection in complex combinatorial spaces, such as the space of simulated action sequences. Furthermore, the spontaneous appearance of a non-predesigned evolutionary search in artificial agents might lead to a higher degree of open-endedness, arguably a key missing component of current machine intelligence. In this paper we design a computational model to show that Darwinian evolutionary dynamics over informational units can emerge if collectives of such units need to infer statistics of changing environments. We build our argument on a series of equivalences between Bayesian computations and replicator dynamics to demonstrate that the selective advantage of higher information transmission ability between units and of larger population size appears very early on, already at a population size of two units with no consistently shared information between them. Further selection for statistical inference at the collective level leads to a continuous increase of transmission fidelity and population size until the population reaches the ability to maintain and iteratively improve combinatorial information, a transition to the regime of Darwinian evolution. Candidate systems include prebiotic collectives of non-replicating molecules, collectives of neural ensembles representing competing action plans, and reinforcement learning agents with parallel policy search.
... First, to explain the adaptive potential of any Darwinian dynamics in Nature, both those that we already have plenty of observations and understanding about (Darwinian processes on a genetic basis), but also those that have not yet been fully acknowledged, and the usefulness of replicator-based modeling is questionable at this point, such as memetics [26,27], Darwinian neurodynamics [28,29] or quantum Darwinism [30]. Such explanations would mostly be based on either i) finding global cost functions that evolutionary system emergently optimize or ii) relating the computations performed by the system to probabilistic computations that optimally extract information from external data. ...
Preprint
Full-text available
A wide variety of human and non-human behavior is computationally well accounted for by probabilistic generative models, formalized consistently in a Bayesian framework. Recently, it has been suggested that another family of adaptive systems, namely, those governed by Darwinian evolutionary dynamics, are capable of implementing building blocks of Bayesian computations. These algorithmic similarities rely on the analogous competition dynamics of generative models and of Darwinian replicators to fit possibly high-dimensional and stochastic environments. Identified computational building blocks include Bayesian update over a single variable and replicator dynamics, transition between hidden states and mutation, and Bayesian inference in hierarchical models and multilevel selection. Here we provide a coherent mathematical discussion of these observations in terms of Bayesian graphical models and a step-by-step introduction to their evolutionary interpretation. We also extend existing results by adding two missing components: a correspondence between likelihood optimization and phenotypic adaptation, and between expectation-maximization-like dynamics in mixture models and ecological competition. These correspondences suggest a deeper algorithmic analogy between evolutionary dynamics and statistical learning, pointing towards a unified computational understanding of mechanisms Nature invented to adapt to high-dimensional and uncertain environments.
... There are already several neural components available to build a minimal operational version of this model (Stewart et al., 2011; Fernando et al., 2010; Szilágyi et al., 2016). But there are also many open problems before we can reach the same level of operational ...
Article
Full-text available
The well-established framework of evolutionary dynamics can be applied to the fascinating open problems how human brains are able to acquire and adapt language and how languages change in a population. Schemas for handling grammatical constructions are the replicating unit. They emerge and multiply with variation in the brains of individuals and undergo selection based on their contribution to needed expressive power, communicative success and the reduction of cognitive effort. Adopting this perspective has two major benefits. (i) It makes a bridge to neurobiological models of the brain that have also adopted an evolutionary dynamics point of view, thus opening a new horizon for studying how human brains achieve the remarkably complex competence for language. And (ii) it suggests a new foundation for studying cultural language change as an evolutionary dynamics process. The paper sketches this novel perspective, provides references to empirical data and computational experiments, and points to open problems.
... In de and Szilágyi et al. (2016), we describe an instance of a neural implementation for a cognitive architecture and show how the synergy between selection and learning can solve pattern-matching problems. Here, we take these ideas a step further to demonstrate the problem solving capabilities of Darwinian Neurodynamics in a task that is more relevant to understanding cognition. ...
... Instead, we aimed at explaining the mechanistic effect of training and priming on problem solving. In our Experiment 1, we tried to reproduce the difference between the control group (1 out of 31 participants, 3%, solved the problem in the given 4 min) and the combined training group with picture clues (16 participants out of 28, 57%, solved the task; Kershaw, 2016, Personal communication, 28 June) in Kershaw et al.'s (2013) experiment, to provide a benchmark for our cognitive architecture (Szilágyi et al., 2016). We ran 30 simulations in both conditions and compared the problem solving behavior and performance of the models. ...
... Our model is also described in de and Szilágyi et al. (2016). The MATLAB code of the model, the parameters and scripts for running and analyzing the experiments can be downloaded from osf.io/vjfv9. ...
Article
Full-text available
In this paper, we show that a neurally implemented cognitive architecture with evolutionary dynamics can solve the four-tree problem. Our model, called Darwinian Neurodynamics, assumes that the unconscious mechanism of problem solving during insight tasks is a Darwinian process. It is based on the evolution of patterns that represent candidate solutions to a problem, and are stored and reproduced by a population of attractor networks. In our first experiment, we used human data as a benchmark and showed that the model behaves comparably to humans: it shows an improvement in performance if it is pretrained and primed appropriately, just like human participants in Kershaw et al. (2013)'s experiment. In the second experiment, we further investigated the effects of pretraining and priming in a two-by-two design and found a beginner's luck type of effect: solution rate was highest in the condition that was primed, but not pretrained with patterns relevant for the task. In the third experiment, we showed that deficits in computational capacity and learning abilities decreased the performance of the model, as expected. We conclude that Darwinian Neurodynamics is a promising model of human problem solving that deserves further investigation.
Article
Full-text available
Efficient search in vast combinatorial spaces, such as those of possible action sequences, linguistic structures, or causal explanations, is an essential component of intelligence. Is there any computational domain that is flexible enough to provide solutions to such diverse problems and can be robustly implemented over neural substrates? Based on previous accounts, we propose that a Darwinian process, operating over sequential cycles of imperfect copying and selection of neural informational patterns, is a promising candidate. Here we implement imperfect information copying through one reservoir computing unit teaching another. Teacher and learner roles are assigned dynamically based on evaluation of the readout signal. We demonstrate that the emerging Darwinian population of readout activity patterns is capable of maintaining and continually improving upon existing solutions over rugged combinatorial reward landscapes. We also demonstrate the existence of a sharp error threshold, a neural noise level beyond which information accumulated by an evolutionary process cannot be maintained. We introduce a novel analysis method, neural phylogenies, that displays the unfolding of the neural-evolutionary process.
Preprint
Full-text available
In this paper, we performed two types of software experiments to study numerosity classification (subitizing) in humans and machines. The experiments focus on a particular kind of task referred to as Semantic MNIST, or simply SMNIST, where the numerosity of objects placed in an image must be determined. The experiments called SMNIST for Humans are intended to measure the capacity of the Object File System (OFS) in humans. In this type of experiment the measurement result is in good agreement with the value known from the cognitive psychology literature. The experiments called SMNIST for Machines serve similar purposes but investigate existing, well-known (but originally developed for other purposes) and under-development deep learning computer programs. These measurement results can be interpreted similarly to the results from SMNIST for Humans. The main thesis of this paper can be formulated as follows: in machines, image classification artificial neural networks can learn to distinguish numerosities with better accuracy when these numerosities are smaller than the capacity of the OFS in humans. Finally, we outline a conceptual framework for investigating the notion of number in humans and machines.
Article
Full-text available
We propose an evolutionary perspective to classify and characterize the diverse systems of adaptive immunity that have been discovered across all major domains of life. We put forward a new function-based classification according to the way information is acquired by the immune systems: Darwinian immunity (currently known from, but not necessarily limited to, vertebrates) relies on the Darwinian process of clonal selection to 'learn' by cumulative trial-and-error feedback; Lamarckian immunity uses templated targeting (guided adaptation) to internalize heritable information on potential threats; finally, shotgun immunity operates through somatic mechanisms of variable targeting without feedback. We argue that the origin of Darwinian (but not Lamarckian or shotgun) immunity represents a radical innovation in the evolution of individuality and complexity, and propose to add it to the list of major evolutionary transitions. While transitions to higher-level units entail the suppression of selection at lower levels, Darwinian immunity re-opens cell-level selection within the multicellular organism, under the control of mechanisms that direct, rather than suppress, cell-level evolution for the benefit of the individual. From a conceptual point of view, the origin of Darwinian immunity can be regarded as the most radical transition in the history of life, in which evolution by natural selection has literally re-invented itself. Furthermore, the combination of clonal selection and somatic receptor diversity enabled a transition from limited to practically unlimited capacity to store information about the antigenic environment. The origin of Darwinian immunity therefore comprises both a transition in individuality and the emergence of a new information system – the two hallmarks of major evolutionary transitions. Finally, we present an evolutionary scenario for the origin of Darwinian immunity in vertebrates.
We propose a revival of the concept of the 'Big Bang' of vertebrate immunity, arguing that its origin involved a 'difficult' (i.e. low-probability) evolutionary transition that might have occurred only once, in a common ancestor of all vertebrates. In contrast to the original concept, we argue that the limiting innovation was not the generation of somatic diversity, but the regulatory circuitry needed for the safe operation of amplifiable immune responses with somatically acquired targeting. Regulatory complexity increased abruptly by genomic duplications at the root of the vertebrate lineage, creating a rare opportunity to establish such circuitry. We discuss the selection forces that might have acted at the origin of the transition, and in the subsequent stepwise evolution leading to the modern immune systems of extant vertebrates.
Conference Paper
Based on previous results on language games, here I study cultural dynamics extended in spatial environments. The underlying model makes assumptions regarding cognitive aspects of the individuals based on the Neuronal Replicator hypothesis. Although I assume a simple and minimal version of cultures, this model allows exploring the effects of idiosyncratic as well as externally imposed environmental preferences on cultural traits. I also study the case of dispersal of individuals and find that this factor is key for the rapid spread of cultural traits.