Deterministic ADD module l_i: (ADD(r), l_j)

Source publication
Conference Paper
Full-text available
The paper considers spiking neural P systems (SN P systems) with cooperating rules, where each neuron has the same number of sets of rules, labelled identically. Each set is called a component (possibly empty). At each step, only one component can be active for the whole system, and only the rules from the active component are enabled. Each neur...
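To make the component mechanism concrete, here is a minimal Python sketch of a single computation step under the assumption that exactly one component is active system-wide and each neuron uses at most one enabled rule from that component; the Neuron class, the rule triples (threshold, consumed, produced) and the step function are illustrative assumptions, not the paper's construction.

```python
# Illustrative sketch of cooperating rules with components (assumed names and
# rule format; not the construction from the paper).

from dataclasses import dataclass, field

@dataclass
class Neuron:
    spikes: int = 0
    # components[k] = list of rules of component k; a rule is
    # (threshold, consumed, produced): enabled if spikes >= threshold.
    components: list = field(default_factory=list)

def step(neurons, synapses, active):
    """One step: every neuron may use at most one enabled rule,
    taken only from the currently active component."""
    emitted = {}
    for name, n in neurons.items():
        for threshold, consumed, produced in n.components[active]:
            if n.spikes >= threshold:      # rule is enabled
                n.spikes -= consumed
                emitted[name] = produced
                break
    for src, out in emitted.items():
        for dst in synapses.get(src, []):
            neurons[dst].spikes += out     # spikes move along synapses

# Two neurons with two components each; component 1 of l0 is empty.
neurons = {
    "l0": Neuron(2, [[(2, 2, 1)], []]),
    "r1": Neuron(0, [[], [(1, 1, 1)]]),
}
step(neurons, {"l0": ["r1"]}, active=0)
print(neurons["r1"].spikes)                # -> 1
```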

Contexts in source publication

Context 1
... each label l_i of an instruction in M_u, we associate a neuron σ_li and some auxiliary neurons σ_li,q, q = 1, 2, ..., thus precisely identified by label l_i. Specifically, modules ADD and SUB are constructed to simulate the instructions of M_u. The modules are given in graphical form in Figs. 3 and 4. In the initial configuration, all neurons of Π_u are empty. There are two additional tasks to solve: to introduce the mentioned spikes in the neurons σ_l0, σ_1, σ_2, and to output the computed ...
Context 2
... general, the simulation of an ADD or a SUB instruction starts by introducing two spikes in the neuron with the instruction label (we say that this neuron is activated). Simulating l_i: (ADD(r), l_j) (module ADD in Fig. 3). Assume that we are in a step t when we have to simulate an instruction l_i: (ADD(r), l_j), with two spikes present in neuron σ_li (like σ_l0 in the initial configuration) and no spikes in any other neuron, except in those associated with registers. Even if the system is in the second component at the time, it must switch over to the ...
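At the register-machine level, the behaviour that the deterministic ADD module must reproduce is simple: increment register r and pass control to l_j. A minimal sketch of that semantics follows; the dictionary-based program representation and function names are my own, not the paper's notation.

```python
# Register-machine view of l_i: (ADD(r), l_j): add 1 to register r, go to l_j.
# The SN P module achieves the same effect with spikes: neuron sigma_li fires,
# spikes are added to the register neuron sigma_r, and sigma_lj is activated.

def run(program, registers, label, halt="lh"):
    while label != halt:
        op, r, nxt = program[label]
        if op == "ADD":
            registers[r] += 1       # simulated by adding spikes to sigma_r
            label = nxt             # simulated by activating sigma_lj
        # SUB instructions would be handled analogously by the SUB module
    return registers

prog = {"l0": ("ADD", 1, "l1"), "l1": ("ADD", 1, "lh")}
print(run(prog, {1: 0}, "l0"))      # -> {1: 2}
```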

Similar publications

Article
Full-text available
Given a set of n points P in the plane, the first layer L_1 of P is formed by the points that appear on P's convex hull. In general, a point belongs to layer L_i if it lies on the convex hull of the set P \ ⋃_{j<i} L_j. The convex layers problem is to compute the convex layers L_i. Existing algorithms for th...
Article
Full-text available
In this paper, we discuss the algorithm engineering aspects of an O(n^2)-time algorithm [6] for computing a minimum-area convex polygon that intersects a set of n isothetic line segments.
Preprint
Full-text available
For each stratum of the space of translation surfaces, we introduce an infinite translation surface containing in an appropriate manner a copy of every translation surface of the stratum. Given a translation surface (X,ω) in the stratum, a matrix is in its Veech group SL(X,ω) if and only if an associated affine automorphism of the infinite surface...
Article
Full-text available
In recent years, arrays of extracellular electrodes have been developed and manufactured to record simultaneously from hundreds of electrodes packed with a high density. These recordings should allow neuroscientists to reconstruct the individual activity of the neurons spiking in the vicinity of these electrodes, with the help of signal processing...

Citations

... It should be noted that most of these variants were proven to be Turing universal as number accepting/generating devices. On the other hand, other authors have proposed universal register machines to compute functions by using advanced variants of the SN P systems, such as request rules (Song & Pan, 2016), communication on request (Pan, Paun, Zhang, & Neri, 2017b; Wu, Bilbie, Paun, Pan, & Neri, 2018), astrocytes (Kong, Jiang, Chen, & Xu, 2014), cooperating rules (Metta, Raghuraman, & Krithivasan, 2014), anti-spikes (Song, Jiang, Shi, & Zeng, 2013a), dendrites (Peng et al., 2020), and local synchronization (Song, Pan, & Paun, 2013b). In both applications (number accepting/generating devices and register machines), the authors have intended to minimize the number of neurons to create compact SN P systems. ...
... As can be observed from Table 10, several authors have made intensive efforts to propose advanced register machines by using the minimum number of neurons and spiking rules per neuron. Here, our proposal uses fewer neurons with standard spiking rules in comparison with existing approaches (Kong et al., 2014; Metta et al., 2014; Pan et al., 2017b; Paun & Paun, 2007; Peng et al., 2020; Song et al., 2013a; Song & Pan, 2016; Song et al., 2013b; Wu et al., 2018; Zhang, Zeng, & Pan, 2008). In addition, our approach requires at most two spiking rules per neuron. ...
Article
In spiking neural P (SN P) systems, neurons are interconnected by means of synapses and use spikes to communicate with each other. In biology, however, the complex structure of the dendritic tree is also an important part of the communication scheme between neurons, since these structures are linked to advanced neural processes such as learning and memory formation. In this work, we present a new variant of SN P systems inspired by diverse dendrite and axon phenomena such as dendritic feedback, dendritic trunks, dendritic delays and axonal delays. This new variant is referred to as a spiking neural P system with dendritic and axonal computation (DACSN P system). Specifically, we include experimentally proven biological features in current SN P systems to reduce the computational complexity of the soma by providing it with stable firing patterns through dendritic delays, dendritic feedback and axonal delays. As a consequence, the proposed DACSN P systems use the minimum number of synapses and neurons with simple and homogeneous standard spiking rules. Here, we study the computational capabilities of DACSN P systems. In particular, we prove that DACSN P systems with dendritic and axonal behavior are universal as number-accepting/generating devices. In addition, we construct a small universal SN P system that uses 39 neurons with standard spiking rules to compute any Turing computable function.
Chapter
Full-text available
Spiking neural P systems, namely SN P systems, are a class of distributed and parallel neural-like computation models, inspired by the way neurons communicate by means of spikes. It has been shown that SN P systems have powerful computation capability and significant potential in real-life applications, and they have received more and more attention from the scientific community. This paper first introduces the formal definition of standard SN P systems and some notions that are often used in this field. Then, the theoretical results on the computation power and efficiency of SN P systems are surveyed. The applications of SN P systems are recalled by summarizing the literature on their real-life applications. Finally, some hot topics and further research lines on SN P systems are provided.
Conference Paper
This paper is an attempt to relax the condition of using the rules in a maximally parallel manner in the framework of spiking neural P systems with exhaustive use of rules. To this aim, we consider the minimal parallelism of using rules: if one rule associated with a neuron can be used, then the rule must be used at least once (but we do not care how many times). In this framework, we study the computational power of our systems as number generating devices. Weak as it might look, this minimal parallelism still leads to universality, even when we eliminate the delay between firing and spiking and the forgetting rules at the same time.
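The difference between the exhaustive (maximally parallel) use of a rule and the minimal parallelism described in this abstract can be sketched as follows; the rule format (threshold, spikes consumed per application) is an assumption made for illustration only.

```python
import random

# Exhaustive (maximally parallel) use: an enabled rule is applied as many
# times as the available spikes allow.
def apply_exhaustive(spikes, threshold, consumed):
    times = spikes // consumed if spikes >= threshold else 0
    return spikes - times * consumed, times

# Minimal parallelism: an enabled rule must be applied at least once, but the
# number of repetitions is chosen nondeterministically.
def apply_minimal(spikes, threshold, consumed):
    if spikes < threshold:
        return spikes, 0
    times = random.randint(1, spikes // consumed)
    return spikes - times * consumed, times

print(apply_exhaustive(7, 2, 2))   # -> (1, 3): rule applied 3 times
print(apply_minimal(7, 2, 2))      # -> rule applied 1, 2, or 3 times
```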
Conference Paper
The genetic algorithm is a well-known bio-inspired algorithm that has been widely used to solve practical real-life problems. The performance of the algorithm depends heavily on its convergence, which is related to the values of the parameters involved. Selecting suitable mutation and crossover rates that achieve fast or slow convergence for unknown problems is formulated as a hard problem. Membrane computing models, a new framework inspired by the cell model, have a membrane structure with region segmentation and are intrinsically discrete, non-deterministic, programmable and transparent. In this paper, a hybrid "fast-slow" convergent framework for genetic algorithms, inspired by membrane computing, is proposed and applied to search for optimal solutions of 41 benchmark functions. The experimental results show that the method performs well in solving the benchmark functions, achieving an accuracy rate of about 96%.
Article
Spiking neural P systems, shortly called SN P systems, are a class of distributed and parallel neural-like computing models, inspired by the way neurons spike and communicate with each other by means of spikes. In this work, we propose a new variant of SN P systems, called SN P systems with request rules. In such a system, besides spiking and forgetting rules, a neuron can have request rules, with which the neuron can sense a "stimulus" from the environment by receiving a certain number of spikes. We investigate the computation power of SN P systems with request rules. It is shown that such systems are Turing universal, even with a small number of neurons. Specifically, (i) SN P systems with request rules having 4 neurons can compute any set of Turing computable natural numbers; (ii) with 47 neurons, such systems can compute any Turing computable function.
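Read from the abstract alone, a request rule lets a neuron pull spikes in from the environment instead of receiving them over a synapse. A minimal sketch of that idea, with hypothetical class and method names:

```python
# Hypothetical sketch of a request rule: the neuron receives a fixed number of
# spikes from the environment, treated here as an unbounded source.

class RequestNeuron:
    def __init__(self, spikes: int = 0):
        self.spikes = spikes

    def request(self, amount: int) -> None:
        """Apply a request rule: sense the environment by taking `amount` spikes."""
        self.spikes += amount

n = RequestNeuron()
n.request(3)            # the neuron senses a stimulus of 3 spikes
print(n.spikes)         # -> 3
```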