Nano Futures 7 (2023) 025003 https://doi.org/10.1088/2399-1984/accf53
OPEN ACCESS
RECEIVED
30 December 2022
REVISED
19 April 2023
ACCEPTED FOR PUBLICATION
21 April 2023
PUBLISHED
26 May 2023
Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
PAPER
Biologically plausible information propagation in a complementary metal-oxide semiconductor integrate-and-fire artificial neuron circuit with memristive synapses
Lorenzo Benatti1, Tommaso Zanotti1, Daniela Gandolfi2,3, Jonathan Mapelli2,3
and Francesco Maria Puglisi1,3,∗
1 Dipartimento di Ingegneria ‘Enzo Ferrari’, Via P. Vivarelli 10/1, 41125 Modena, Italy
2 Dipartimento di Scienze Biomediche, Metaboliche e Neuroscienze, Via G. Campi 287, 41125 Modena, Italy
3 Centro Interdipartimentale di Neuroscienze e Neurotecnologie, Università degli Studi di Modena e Reggio Emilia, 41125 Modena, Italy
∗ Author to whom any correspondence should be addressed.
E-mail: francescomaria.puglisi@unimore.it
Keywords: CMOS, neuromorphic, mutual information, memristors, spiking neural network, artificial neural network
Abstract
Neuromorphic circuits based on spikes are currently envisioned as a viable option to achieve
brain-like computation capabilities in specific electronic implementations while limiting power
dissipation given their ability to mimic energy-efficient bioinspired mechanisms. While several
network architectures have been developed to embed in hardware the bioinspired learning rules
found in the biological brain, such as spike timing-dependent plasticity, it is still unclear if
hardware spiking neural network architectures can handle and transfer information akin to
biological networks. In this work, we investigate, from a theoretical perspective, the analogies in terms of information propagation between an artificial neuron combining memristor synapses with a rate-based learning rule and the response of a biological neuron. Bioinspired experiments have been reproduced by linking the biological probability of release with the artificial synapse conductance. Mutual information and surprise have been chosen as metrics to show how, for different values of the synaptic weights, an artificial neuron enables the development of a reliable and biologically plausible neural network in terms of information propagation and analysis.
1. Introduction
Neuromorphic technologies have been designed to support large-scale spiking neural networks (SNNs)
encompassing bioinspired mechanisms. Unlike conventional artificial intelligence systems, these networks
base their activity on the transfer of binary units (spikes) through synaptic contacts. The latter, in turn, can undergo persistent changes of their strength upon specific patterns of stimulation. The modifications expressed by synaptic contacts following the induction of long-term plasticity [1,2] can be reliably reproduced by memristors [3–5], which can be designed to change their conductance according to their past
activity [6]. In this respect, recent advancements in memristive device technology development have brought
them closer to full integration in standard complementary metal-oxide semiconductor (CMOS) platforms,
which is per se a tough challenge, as these devices must fulfill very stringent requirements for integration
with current integrated circuits. Among these requirements are integration densities of up to 1 gigabyte mm⁻², writing voltages <3 V, switching energy <10 pJ, switching time <10 ns, writing endurance >10¹⁰ cycles (or full potentiation/depression cycles), dynamic range >10, and low conductance fluctuations over time if no bias is applied (<10% for >10 years) [7]. Notably, some memristive devices have fulfilled
such stringent criteria [7,8], but they still exhibit high manufacturing costs despite the simplicity of
individual memristive cells due to the need for additional elements (series transistor, selector, or resistor) and
to specific beyond-CMOS back-end-of-line interconnects. Still, these devices are gradually triggering the
interest of the semiconductor industry and are currently considered front-runners in the race to realize a
CMOS-compatible cost-effective synaptic element for hardware SNNs.
Interestingly, several network architectures have been developed to embed in hardware the bioinspired
learning rules that are required to exploit SNN functionalities, such as spike timing-dependent plasticity,
rate-based plasticity, and the Bienenstock–Cooper–Munro learning rule [6]. Nevertheless, while a significant
amount of work has been published in this domain, it is still unclear if currently proposed neuromorphic
hardware architectures for SNNs have the capacity to handle and transfer information in a way that
resembles what happens in the corresponding biological networks. In a neuronal microcircuit, in fact, the
information is exchanged between neurons in the form of input spike series that are conveyed as output
temporal spike series. Therefore, the amount of transferred information can be estimated by looking at the
input–output relationship, which is computed by analyzing the stimulus patterns and the neural responses.
This dependency can be formalized in several ways, like tuning [9,10], gain [11], and selectivity curves
[12,13], and these approaches allow us to provide quantitative estimates of the information content
independently from the neural code. The language employed by neurons to communicate can be cracked by
adopting parameters of communication and information theory [14]. Among these, mutual information
(MI) has been already adapted to neuroscience to estimate the information transmitted by circuits [15],
neurons [16], or single synapses [17] without specific knowledge of neural code semantics. MI is directly
derived from the response and noise entropy [18], which are correlated to the variability of responses to
different inputs or to the same input. Given this premise, the calculation of MI allows us to evaluate the capacity of a neuronal system to separate different inputs, and thus to transmit information [19,20]. For
this reason, MI has been used consistently in neuroscience to show the modalities of information
propagation in biological neural networks. On the other hand, much of the effort in neuromorphic
electronics has been devoted to the design, development, and implementation of artificial CMOS [21] or
memristive [22] neurons and either CMOS or memristive synapses [23–25] in circuits that embed specific
learning rules and electrophysiological properties, paying less or no attention to the overall performance of
the system under investigation from the standpoint of information transmission. In fact, understanding
whether the currently proposed artificial SNNs can at least qualitatively replicate the extreme efficiency with
which biological networks handle and transfer information represents an important step toward the
development of brain-inspired and ultra-low-power artificial processing systems.
In this work, we provide for the first time an in-depth analysis of the information transmission in an SNN
that encompasses CMOS leaky integrate-and-fire (LIF) neurons and memristive synapses by focusing on
how MI is transmitted through the network. Specifically, we focus on a simplified network, with the neuron
circuit mimicking a cerebellar granule cell (GC), found to be the optimal benchmark to calculate MI [26]. In
fact, in the case of the cerebellar GC, the input–output combination is particularly convenient, given the
small number of dendrites (four) and the limited amount of inputs received, compared to the thousands of
contacts received, for instance, by cortical and hippocampal pyramidal neurons. In addition, besides the few
dendrites, GCs when activated respond with a limited number of spikes (typically two or fewer [27]) confined
in a narrow time window regulated by synaptic inhibition [28]. This peculiarity reduces the complexity of
calculations and suggests the use of this microcircuit as a model to investigate changes in the transmission
properties by internal or external agents. We compare how MI changes not only with specific stimuli and
input patterns, but also how it evolves with changes induced by altering synaptic strength (i.e. upon
learning), finding a striking qualitative resemblance with the results found experimentally in biological
networks [29,30] and in simulations with biologically realistic neurons [26]. This paper is organized as
follows. In section 2, we illustrate the methods used to compute specific quantities related to information
propagation. In section 3, we report the details of the synaptic device used in this study, clarifying how the
latter were characterized and modeled; the details of the electronic neuron model are given as well. In
section 4, we provide the details of the proposed artificial network and of the analogies and differences as
compared to its biological counterpart. In section 5, results are reported and discussed. Conclusions follow.
2. Methods
Information theory has been extensively used in neuroscience to estimate the amount of information
transmitted within neuronal circuits [14,16,20,26], where a set of input stimuli can be correlated with
output responses to estimate the information conveyed by neurons. The level of correlation primarily
depends on the input variability, which, in turn, is expanded by the number of afferent fibers. In the central
nervous system, there is a large variability in the number of input synapses a neuron can receive: from a few
units (cerebellar GCs [29]) to hundreds of thousands (200 000 in cerebellar Purkinje cells [30]). In terms of
information transfer, only neurons with limited fan-in connections can be efficiently analyzed, avoiding the
explosion of combinations according to the input space size. Following our recent work [31], we have
simulated an artificial architecture composed of individual neurons with only four synaptic inputs, and the
level of correlation was estimated by first dividing neuronal responses into temporal bins, which were digitized
depending on the presence of a spike. This discretization allowed us to convert a spike train into a binary word of length N = T/∆t (where ∆t is the temporal bin duration and T is the spike train duration) containing only digital labels (where ‘0’ means no spike and ‘1’ means spike). Neurons respond to input stimuli with a variety
of binary words, generating a neuronal vocabulary that can be explored by varying the input stimuli. The larger the vocabulary, the richer the conveyed information. However, efficient communication is ensured by a
correlation between input stimuli and output words. In information theory, two factors determine the
amount of information a neuron conveys about its inputs, namely the response entropy (i.e. the neuronal
vocabulary size) and the noise entropy (i.e. the reliability of responses when stimuli are given). The quantity
that considers these two factors simultaneously by subtracting the noise entropy from the response entropy is
Shannon MI, which is measured in bits and can be calculated through the following equation:
MI(R,S) = MI(S,R) = H(R) - H(R|S) = \sum_{s \in S} \sum_{r \in R} p(s) \, p(r|s) \log_2 \frac{p(r|s)}{p(r)}    (1)
where r and s are the response and the stimulus pattern, respectively; p(r) and p(s) are the probabilities that r and s occur within a single acquisition. Finally, p(r|s) is the probability of obtaining the response pattern r given the stimulus pattern s.
MI is intrinsically an average property over all inputs, and it can be interesting to decompose MI into a single-stimulus contribution (stimulus-specific surprise (SSS)) or even a single-spike contribution (surprise per spike (SpS)). These two quantities can be computed as:
I(s) = SSS = \sum_{r} p(r|s) \log_2 \frac{p(r|s)}{p(r)}    (2)

I_{\mathrm{per\,spike}}(s) = SpS = \frac{I(s)}{\text{stimulus spike count}}    (3)
Experimental issues associated with high data dimensionality limit the estimation of all the probabilities in
the MI formula. Estimating the conditional entropy requires determining the response probability given any
input stimulus. If the neural response shows sufficiently low variability, the response probability can be
assessed with a tractable amount of data [26,31].
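To make the estimation procedure concrete, the following minimal Python sketch (our illustration, not the authors' code; all function and variable names are ours) digitizes spike trains into binary words and estimates MI, SSS, and SpS from repeated stimulus–response trials, following equations (1)–(3) under the assumption of equiprobable stimuli:

```python
# Minimal sketch of the estimators in equations (1)-(3); illustrative only.
from collections import Counter
import numpy as np

def binarize(spike_times, T, dt):
    """Digitize a spike train of duration T into a word of N = T/dt bins:
    '1' if the bin contains at least one spike, '0' otherwise."""
    word = ['0'] * int(T / dt)
    for t in spike_times:
        if 0 <= t < T:
            word[int(t / dt)] = '1'
    return ''.join(word)

def information_measures(trials):
    """trials: dict mapping each stimulus label to a list of response words
    (one word per repetition). Stimuli are assumed equiprobable.
    Returns MI in bits plus per-stimulus SSS and SpS dictionaries."""
    p_s = 1.0 / len(trials)
    # estimate p(r|s) from the empirical response frequencies
    p_r_given_s = {}
    for s, words in trials.items():
        counts = Counter(words)
        p_r_given_s[s] = {r: n / len(words) for r, n in counts.items()}
    # marginal response distribution p(r) = sum_s p(s) p(r|s)
    p_r = Counter()
    for dist in p_r_given_s.values():
        for r, p in dist.items():
            p_r[r] += p_s * p
    mi, sss, sps = 0.0, {}, {}
    for s, dist in p_r_given_s.items():
        i_s = sum(p * np.log2(p / p_r[r]) for r, p in dist.items())  # equation (2)
        sss[s] = i_s
        mean_spikes = np.mean([w.count('1') for w in trials[s]])
        sps[s] = i_s / mean_spikes if mean_spikes > 0 else 0.0       # equation (3)
        mi += p_s * i_s                                              # equation (1)
    return mi, sss, sps
```

As noted above, with sufficiently low response variability the conditional distributions p(r|s) can be estimated from a modest number of repetitions per stimulus.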
3. Synaptic devices and neuron model
3.1. Synaptic devices and experiments
According to the configuration adopted for biological experiments [31] and simulations with biologically
realistic neurons [26], we investigated how information is propagated in a cerebellar GC-like artificial CMOS
neuron with four memristor-based synaptic inputs. In this respect, we ran circuit simulations using Cadence
Virtuoso software, in which the response of the artificial CMOS neuron was abstracted by using a Verilog-A
behavioral description of its constituent building blocks (as specified in section 3.2), while the characteristics
of the memristive elements (i.e. the artificial synapses) were carefully reproduced by means of a compact
model developed internally, i.e. the UNIMORE resistive random access memory (RRAM) compact model
[32]. The latter is a physics-based compact model supported by the results of advanced multiscale
simulations [33] that has been shown to reproduce both the quasi-static and dynamic behavior of different
memristor technologies with a single set of parameters [34] and considers the intrinsic device stochastic
response, thermal effects, and random telegraph noise [35]. Specifically, the memristive elements adopted in
this study are commercially available C-doped self-directed channel (SDC) memristors by Knowm [36],
available in a dual in-line package. These devices were chosen because, to the best of the authors’ knowledge,
they are the only commercially available packaged RRAM devices to date. This choice allows us to show that
MI propagates through an SNN with CMOS LIF neurons and memristive synapses akin to what happens in
biological networks, and that such behavior can be achieved with available commercial-grade RRAM devices,
requiring no specific advancements in technology development.
As shown in figure 1(a), the SDC memristor consists of a stack composed of W/Ge2Se3/Ag/Ge2Se3/SnSe/
Ge2Se3:C/W, where Ge2Se3:C is the active layer [36]. During fabrication, the three layers below the top
electrode are mixed and form the Ag source [36]. The SnSe layer acts as a barrier to avoid Ag saturation in the
active layer and is responsible for the production of Sn ions and their migration into the active layer during
the initial operation of the device (typically addressed as ‘forming’), which promotes Ag agglomeration at
specific sites [36]. The details of the mechanism at the basis of the resistive switching in these devices are
available in [36]. To fully capture the behavior of these devices in circuit simulations, we carefully calibrated
the parameters of the UNIMORE RRAM compact model against experimental data, as elucidated in figure 1.
Figure 1. (a) Sectional schematic of the Knowm C-doped SDC memristor. (b) Experimental (red) and modeled (black) I–V characteristic, composed of a set (V > 0) and a reset (V < 0) mechanism. (c) Pulse waveforms used to potentiate and depress the memristor, including a 50 mV read pulse. (d) Experimental (symbols) and simulated (solid line) pulsed response of the SDC memristor when subject to sequences of potentiation (red circles) and depression (blue squares) pulses. The device is initially driven in LRS by means of 20 initial set rectangular pulses. The resistance read after each pulse by means of a read pulse is computed as (V_READ/I) − R_s (R_s = 10 kΩ series resistance). (e) The I–V and pulsed characteristics of (b) and (d) are reproduced by modulating the CF barrier (x) of an equivalent oxide RRAM with an oxide thickness t_ox = 40 nm.
The electrical measurements were performed using the Keithley 4200-SCS. To analyze and then model the
behavior of the memristors, we performed a sequence of quasi-static I–V measurements by applying voltage sweeps between −0.8 V and 0.4 V, with a current compliance of 10 µA enforced by the Keithley 4200-SCS. These
measurements drive the device to a low resistive state (LRS) with a SET operation (V>0) and to a high
resistive state (HRS) with an ensuing RESET operation (V<0). Results are shown in figure 1(b) (red traces)
and reveal that the RESET curves are characterized by an abrupt transition from the LRS to the HRS and a
strong cycle-to-cycle variability of the switching voltage, while the SET operation is associated with a more
predictable and gradual transition from HRS to LRS. Then, to experimentally evaluate the synaptic
functionality of the memristors (i.e. the capability to respond to spike-like voltage stimuli rather than to
quasi-static voltage sweeps), we designed a suitable pulsed voltage sequence (figure 1(c)), which gradually
drives the device resistance toward higher or lower resistance (or, equivalently, conductance) states. In this
experiment, a 10 kΩ resistor was connected in series with the device to prevent accidental current overshoots, because the Keithley 4200-SCS does not support the enforcement of current compliance when performing
pulsed tests. (The series resistor can then be removed in the actual circuit implementation.) The device was
initially driven in LRS by means of 20 rectangular pulses (V = 0.6 V; T = 100 µs, initial set). Then, long-term depression (LTD) and long-term potentiation (LTP) were obtained by applying trains of 20 depression pulses (V = −0.2 V; T = 10 µs) followed by 20 potentiation pulses (V = 0.55 V; T = 30 µs). To evaluate the transition smoothness, each potentiation or depression pulse is followed by a small reading pulse (V_READ = 50 mV; T_READ = 50 µs) that is used to retrieve the evolution of the resistance values during LTD
and LTP. Figure 1(d) reports the resistance evolution for 15 identical depression–potentiation cycles,
revealing that a smooth and reproducible synaptic analog behavior is achievable with these devices.
Although SDC memristors are ion-conducting devices that change their resistance due to the movement of Ag⁺ ions into the device structure [36], their behavior is well replicated (figures 1(b) and (d), black traces) by the modulation of an equivalent conducting filament (CF) barrier (figure 1(e)) [32–35], which is the typical behavior of filamentary memristive devices. The barrier thickness (x in figure 1(e)) is in fact directly correlated to the memristor conductance, which represents the synaptic strength. Further details of the
compact model and the extracted parameters for this technology are reported in [6].
3.2. Neuron model and simulations
Figure 2(a) shows the model of the LIF neuron [6] supporting a rate-dependent plasticity rule on the
synaptic memristive devices that was designed and simulated in this work. In this neuron model, the input
terminal (see neuron input in figure 2(a)) is kept at virtual ground, and input spikes from presynaptic
neurons are integrated into a capacitor. When the voltage across the capacitor passes a threshold, an output
spike is generated at the neuron output, and after a predefined delay (i.e. Tspike delay in figure 2(a)), the
capacitor is discharged to reset the system to its initial state. The rate at which the capacitor charges depends
both on the input spikes’ rate and the presynaptic strength. Because the neuron is leaky, in the absence of
input spikes the capacitor discharges with an appropriate time constant. This feature mirrors the finding that
biological neurons have a leaky membrane, which makes the membrane depolarization decay when no spikes are fed at their inputs. The effect of the time constant value in the LIF model was already evaluated in [37], in which it
is clearly shown that the presence of leakage provides enhanced noise robustness of SNNs while decreasing
Figure 2. (a) LIF neuron model. (b) The synaptic plasticity is implemented by a rate-based learning rule, where a presynaptic stimulation rate higher (lower) than ν₀ leads to the potentiation (depression) of the relative synapse. (c) Spiking diagram for potentiation and depression.
sparsity in computation. This implies a trade-off between robustness and energy efficiency in SNNs, which
can be optimized by mimicking the dynamics found in biological neurons, thus setting the LIF time constant
close to the values used in biological neuron simulations.
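For illustration, the LIF behavior described above can be sketched in discrete time as follows; this is our own minimal reconstruction, and the parameter values are placeholders rather than the values used in the Verilog-A circuit model:

```python
# Minimal discrete-time LIF sketch; parameter values are illustrative placeholders.
import numpy as np

def lif(input_current, dt=1e-4, tau=20e-3, c=1e-9, v_th=0.5, t_delay=2e-3):
    """Integrate the input current on a capacitor, leak with time constant tau,
    and emit a spike on threshold crossing; the capacitor is then discharged
    (here the reset is held for t_delay, mimicking the spike-delay window)."""
    v, t_reset, spikes = 0.0, -np.inf, []
    for k, i_in in enumerate(input_current):
        t = k * dt
        if t - t_reset < t_delay:          # capacitor held discharged after a spike
            v = 0.0
            continue
        v += dt * (i_in / c - v / tau)     # integration plus leakage toward 0 V
        if v >= v_th:                      # threshold crossing -> output spike
            spikes.append(t)
            t_reset = t
            v = 0.0
    return spikes
```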
The synaptic plasticity mechanism implemented in the neuron model is purely rate based and depends
only on the rate of the presynaptic stimulation. Thus, a high (low) rate of presynaptic stimulation leads to the
potentiation (depression) of the associated synapse. In the adopted neuron model, this learning rule is
implemented by appropriately designing the shape of the spike (see figure 2(c)), so that each presynaptic
spike results in a small potentiation of the associated synaptic memristor device, and by introducing a back
spike, which results in a small synaptic depression; see figure 2(c). It is worth noting that this back spike only
serves the purpose of properly implementing the rate-based learning rule and is not related to the firing
activity of the postsynaptic neuron and does not appear at its output.
When firing the back spike from its input terminal, the postsynaptic neuron also outputs two complementary control signals (Dep and its complement) that are connected to the gates of two MOSFET devices (see figures 2(a) and (c)), which
disconnect the synapses from their presynaptic neurons and connect their bottom electrodes to ground; see
figure 2(c).
When the neuron fires the back spike, the propagation of information from the presynaptic neurons is
therefore temporarily disabled. The occurrence of simultaneous presynaptic spikes and back spikes is
minimized by modeling the time interval between the back spikes as a random variable following a Poisson
distribution (i.e. λ=1 s was used in the simulations) and by designing a back spike with a short duration.
Because the back spikes do not propagate any information, their shape can be designed with some degree of
flexibility and can be adjusted as required. The back spike used in the simulation is shown in figure 2(c) and
has a pulse width of 300 µs, which is much shorter than the presynaptic spike. The mean time interval
between successive back spikes determines the characteristic of the implemented rate-based learning rule. As
shown in figure 2(b), a stimulation rate ν₀ exists at which the potentiation and depression effects balance out, leading to no average synaptic strength change. Presynaptic stimulation rates higher than ν₀ result in a net synapse potentiation, while lower stimulation rates result in a net depression.
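Assuming, purely for illustration, fixed per-event weight increments (an assumption of ours; the actual increments depend on the memristor response to the spike waveforms), the balance rate ν₀ can be sketched as follows:

```python
# Hedged sketch of the average weight drift implied by the rate-based rule:
# each presynaptic spike adds dw_pot, each Poisson back spike removes dw_dep.
def mean_weight_drift(nu_pre, nu_back, dw_pot, dw_dep):
    """Average dw/dt in weight units per second; it vanishes at the balance
    rate nu0 = (dw_dep / dw_pot) * nu_back."""
    return dw_pot * nu_pre - dw_dep * nu_back

# Example: with a 1 s mean back-spike interval (nu_back = 1 Hz) and equal
# per-event increments, nu0 = 1 Hz: presynaptic rates above 1 Hz potentiate
# the synapse on average, while lower rates depress it.
nu0 = (1.0 / 1.0) * 1.0
```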
Although the spikes used in this work were designed to provide a system response in a similar timescale to
that of biological neurons and to be compatible with the employed memristor technology, it is worth noting
that the pulse shape and the parameters of the neuron circuit (e.g. threshold, integrator time constant) can
be scaled appropriately to adapt the system response to satisfy possible application requirements,
highlighting the flexibility of the electronic implementation of the bioinspired neural network.
4. Simulated network
To understand whether the artificial SNN features information transmission properties akin to those found
in biological systems, we implemented a circuit that resembles the GC morphology, shown schematically in
figure 3(a), as well as the spike digitization principle used in [26,31]. GCs are typically studied because they
constitute more than half of the neurons in the brain, and in particular, they present an exceptionally low
number of synapses (four on average) [38,39], which makes for a compact electronic implementation. In neuroscience experiments, the stimulation is typically performed by applying spike
Figure 3. (a) Example of electrical stimulation of a GC (four inputs and one output) with MF inputs and spike digitization. (b) Implementation of a biologically plausible network with four input neurons (N1–4) spiking to an output neuron with memristor-based synapses. The input (output) time is divided into time bins of ∆t_in = 50 ms (∆t_out = 10 ms), in which the spikes are stochastically distributed.
trains through the mossy fibers (MFs) [40], and the experiment time is divided into temporal bins, where the
presence of a spike is coded as logic 1 (0 otherwise). Figure 3(b) reports the schematic of the implemented
artificial neuron, with the related memristor-based synapses emulating the structure on the left. Each
synapse may receive up to four spikes over time, i.e. the experiment time during which the inputs are
delivered to the network is composed of four temporal bins. In [26,31], time bins of 10 and 6 ms have been
used to stimulate the input and digitize the output train, respectively. Due to the technological constraints of our memristors and to design choices, spikes with a longer duration are needed, which led us to define time bins of 50 and 10 ms for the input and output, respectively, with no loss of generality. As in [31,36], we used four time bins on four inputs for the stimulation (as in figure 3(b)), giving 2^{N_bin · N_input} = 2^{4·4} = 65 536 possible input combinations and a total stimulation period of 50 ms × 4 = 200 ms. Spike stimulations (10 ms
long) are applied at random time (jitter) within each time bin (50 ms long), consistently with the idea of
digitization of random input spike trains and with the need to introduce in an otherwise deterministic
artificial network the stochastic features observed in the biological counterpart. Specifically, in vitro experiments on GCs consist of applying specific (coded) stimuli through the neuron’s MFs and repeating the
experiments several times for each stimulus to sample the intrinsic neuron variability, which leads to a
stochastic output. To reproduce the same stochastic response of the neuron using a deterministic neuron
model, in circuit simulations each stimulus is delivered with a Poisson distributed delay (jitter) inside each
time bin, therefore mimicking that the stimuli are provided by four other presynaptic neurons.
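A minimal sketch (our reconstruction, not the authors' testbench) of this input delivery is shown below: a 4 × 4 binary pattern is mapped to 10 ms spikes placed at a random delay inside each 50 ms bin. A uniform draw is used here for simplicity, whereas the text specifies a Poisson-distributed delay:

```python
# Illustrative input delivery: one 4x4 binary stimulus -> jittered spike onsets.
import numpy as np

rng = np.random.default_rng(0)
T_BIN, T_SPIKE = 50e-3, 10e-3   # input bin and spike durations from the text

def deliver(pattern):
    """pattern: 4x4 array of 0/1 (one row per input fiber, one column per bin).
    Returns, for each input, the list of spike onset times; each spike is
    jittered inside its bin while remaining entirely within it."""
    onsets = []
    for row in pattern:
        times = [b * T_BIN + rng.uniform(0, T_BIN - T_SPIKE)
                 for b, bit in enumerate(row) if bit]
        onsets.append(times)
    return onsets

# Example: the all-ones stimulus delivers four jittered spikes on each input.
spike_times = deliver(np.ones((4, 4), dtype=int))
```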
In this study, we aimed at quantifying how theoretical quantities related to information propagation
through the network are affected by learning (i.e. plasticity on synaptic weights) to confirm analogies
between biological and artificial frameworks. To do so, it is imperative to understand properly how synaptic
strength is represented in the two frameworks. Indeed, although in biological experiments the synaptic
efficacy (weight) is measured in terms of release probability p, a stochastic parameter quantifying the
synaptic strength [26], in memristor-based neuromorphic networks the weight is represented by the
memristor conductance. In fact, in biological synapses, the efficacy is increased (decreased) by applying LTP
(LTD) theta burst trains, which, in our approach, will result in an increased (decreased) memristor
conductance. Therefore, in this study, we simulated the system for different values of memristor
conductance, and we looked at how MI, SSS, and SpS are affected by synaptic plasticity. To keep consistency
with what was observed in [26], in which the release probability was equal for all four MF synapses at the GC
(i.e. any permutation of the four inputs was equivalent), we reduced the number of different stimuli from
65 536 (all possible input combinations) to 3876, making the four synaptic inputs equal. Also, to repeat
simulations for different values of memristor conductance while trying to follow bioinspired protocols, the
memristor conductance values used in simulations (and related to p) have been increased between consecutive simulations by applying specifically designed LTP theta bursts (as depicted in figure 4(a)) and
consequently updating the synaptic weights based on the conductance variations (figure 4(b)). For each
synaptic weight, the 3876 stimuli are then delivered 15 times through the inputs and ranked on the basis of
their relative SpS. As the goal of this study was to understand how information propagates through an
artificial network when synaptic strength values are in a fixed configuration, we excluded possible influence
of the synaptic weight variations induced by the input transmission by disabling the plasticity mechanism in
the Verilog-A model of the memristor during the application of input spike trains, leaving it enabled only
during the application of the theta bursts in figure 4. This implies that the results concerning the information
transfer analysis, reported in the following section, can be considered valid regardless of the specific device
employed to represent the synaptic weights. Naturally, such a device needs to show potentiation and
depression capabilities to fulfill the role of a synaptic element in an SNN. Nevertheless, information
Figure 4. (a) Plasticity is simulated by a potentiation theta burst train, leading to a conductivity change (b), which is then used to
calibrate the synaptic weights of the network. In (a), the LTP theta burst sequence is reported with progressive zoom levels. In (b),
a normalized weight change over time is reported when the pattern in (a) is delivered.
propagation through the network will show the features discussed in the following regardless of the
technological specifications and of the peculiar learning features of the synaptic device.
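The reduction from 65 536 to 3876 stimuli follows from a simple combinatorial argument: with the four synaptic inputs made equal, two stimuli that differ only by a permutation of the inputs are equivalent, so the distinct stimuli are the multisets of four 4-bit words. A short check (ours) is sketched below.

```python
# Sanity check of the stimulus counts quoted above.
from math import comb

n_words = 2 ** 4                       # 16 possible 4-bin spike words per input
ordered = n_words ** 4                 # 65 536 ordered input combinations
unordered = comb(n_words + 4 - 1, 4)   # multisets of 4 words: C(19, 4) = 3876
print(ordered, unordered)              # -> 65536 3876
```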
5. Results and discussion
It is now possible to look at the effect of synaptic plasticity on a spiking neuron with memristive synapses by
analyzing the key quantities related to information transfer, such as entropy, MI, and surprise when
stimulating the network, as described in section 4.
5.1. Information transfer analysis
Shannon MI provides a mathematical framework to quantify the amount of information transmitted by a
neuron during neural stimulation. Because our aim was to investigate whether the envisaged neuromorphic
architecture could reliably reproduce neuronal performances, we explored the dependencies of MI on
synaptic efficacy (i.e. memristor conductance). In analogy with [26,31], the reduced number of inputs
allowed us to calculate MI, and, as explained in section 2 and shown in figure 3, we first digitized the spike trains, and then a controlled set of stimuli S was chosen. Second, responses r were detected when stimuli with
known a priori probabilities p(s) were repeatedly presented. Once all the data were collected, the corresponding conditional probabilities p(r|s) and the probability distribution of responses averaged over the
stimuli p(r) were estimated. MI was computed with equation (1), and because in biological systems its value
has been shown to change according to variations of the release probability [26], we investigated the
relationship between MI and the memristor conductance, which in our assumption was the equivalent of the
synaptic efficacy. Figure 5 shows the correlation between the calculated MI and the memristor conductance
values. The overall information transfer is enhanced upon an increase in synaptic efficacy (memristor
conductance) in accordance with the expectations. Furthermore, upon visual inspection, the dependence of MI
on the memristor conductance revealed a good correlation with the corresponding p–MI curve (p; release
probability) obtained with both real biological and simulated neurons (see figures 2(b) and 3(b) in [26]).
These results, besides the demonstration of the validity of this approach in mimicking neuronal information
transfer, also indirectly support the close relationship between the release probability (p̄) and the memristive
conductance. We were also interested in identifying stimuli that were best encoded by the electronic neuron.
The stimulus-specific contribution to the MI (SSS; equation (2)) and the SpS (equation (3)) were therefore computed, allowing us to identify the most informative set of stimuli (figure 6). Specifically, for a given value of
synaptic strength, the network was stimulated with the 3876 inputs, and the latter were then ranked by
descending SpS (black curves in figure 6). The same procedure has been replicated for different memristor
conductance values (2.6, 6.2, and 13.6 µS), and the results are reported in figures 6(b) and (c). We initially
focused on the results obtained when using the lowest memristor conductance value. As shown in
figure 6(c), we identified the stimuli with the highest and lowest SpS, respectively (i.e. the blue and red
markers on the bottom black curve in figure 6(c)). We then tracked how these specific stimuli changed their
ranking when the simulations were repeated after synaptic potentiation (achieved by means of theta bursts,
as explained in section 4). Both markers moved in qualitative agreement with what was reported in [26],
which is conveniently reported also in figure 6(a). Although the stimuli with the highest and lowest SpS at
Figure 5. MI (black), output entropy (Sout, magenta), and noise entropy (Snoise, yellow) versus memristor conductance.
Figure 6. (a) SpS as a function of release probability (p̄), adapted from [26]. The stimuli were ranked as a function of their SpS for every p̄ = 0.1, 0.4, 0.8. The blue and red input stimuli, corresponding to the maximum and minimum SpS for the most depressed network (p̄ = 0.1), are tracked for different p̄. The same experiment is reproduced in our network exploiting different synapse conductance values and tracking (b) the same stimuli as in (a) and (c) using the same approach as in (a), revealing similar trends.
the lowest memristor conductance were not coincident with those found in [26] at the lowest p̄ value, the
qualitative trend was found to be the same. In addition, we verified that the same trend is obtained when
tracking exactly these stimuli in our simulations, as reported in figure 6(b). Both cases confirmed the
expected trends, revealing the dependability of an artificial memristor-based neuro-synaptic circuit in
quantifying the information content of a specific spike train given a determined network strength.
5.2. Discussion
The results shown thus far confirm the similarities between a neuromorphic microcircuit composed of a
neuron with a limited number of synapses endowed with a rate-based learning rule and its biological
counterpart. Bioinspired experiments have been reproduced by assuming a dependency between the release
probability and the conductance of an electronic synapse. Three parameters, namely MI, SSS, and SpS, have
been computed for different synaptic weights, with the aim to analyze the ability of the implemented neuron
to retrieve and quantify information content from a specific stimulus based on the synaptic strength and
input sparseness. Despite the differences between the neuromorphic and the biological neuron, like (a) the
noise/variability level, (b) the stochastic mechanisms underlying the opening of ion channels, or (c) the
stochastic processes involved in the neurotransmitter release process, these results demonstrate from an
information transmission perspective that artificial neurons can be adopted as elements performing complex
computational tasks, such as those performed by biological neurons. This proof of principle is a first
milestone in the development of advanced neuronal networks with performance compatible with brain
circuits, given their capability to compute sparse and temporally uncorrelated information. Furthermore,
unlike conventional hardware, neuromorphic electronic circuits [41] can be designed to operate with limited power consumption in multiple time domains, depending on the circuit architecture. These
advantages, deriving from an electronic implementation of biologically plausible SNNs (e.g. multiple
timescales and reduced area and power consumption), prove remarkably useful for different applications.
6. Conclusions
In this work, we investigated the analogies between an artificial neuron combining memristor synapses and
rate-based learning rule with biological neuron response in terms of information propagation from a
theoretical perspective. Bioinspired experiments have been reproduced by linking the biological probability
of release p̄ with the artificial synapse conductance. MI, SSS, and SpS have been computed for different
synaptic weights, with the aim of analyzing the ability of the implemented neuron to retrieve and quantify
information content from a specific stimulus based on the synaptic strength. The results highlight that an artificial neuron enables the development of a reliable and biologically plausible neural network in terms of
information analysis. Advantages deriving from an electronic implementation (e.g. timescale and area)
provide a remarkably useful tool for different applications.
Data availability statement
The data cannot be made publicly available upon publication because they are not available in a format that
is sufficiently accessible or reusable by other researchers. The data that support the findings of this study are
available upon reasonable request from the authors.
Acknowledgments
The authors would like to thank Mr Davide Florini for tangible support, early developments, and fruitful
discussions. This work has been partially funded under the National Recovery and Resilience Plan, Mission
04 Component 2 Investment 1.5—NextGenerationEU, Call for Tender No. 3277 dated 30 December 2021,
Award Number: 0001052 dated 23 June 2022.
ORCID iDs
Tommaso Zanotti https://orcid.org/0000-0002-4145-8830
Jonathan Mapelli https://orcid.org/0000-0002-0381-1576
Francesco Maria Puglisi https://orcid.org/0000-0001-6178-2614
References
[1] Mapelli J, Gandolfi D, Vilella A, Zoli M and Bigiani A 2016 Heterosynaptic GABAergic plasticity bidirectionally driven by the
activity of pre- and postsynaptic NMDA receptors Proc. Natl Acad. Sci. USA 113 9898–903
[2] Gandolfi D, Bigiani A, Porro C A and Mapelli J 2020 Inhibitory plasticity: from molecules to computation and beyond Int. J. Mol.
Sci. 21 1805
[3] Wu C-W et al 2021 Realizing forming-free characteristic by doping Ag into HfO2-based RRAM Appl. Phys. Express 14 041008
[4] Park J, Ryu H and Kim S 2021 Nonideal resistive and synaptic characteristics in Ag/ZnO/TiN device for neuromorphic system Sci.
Rep. 11 16601
[5] Covi E, Wang W, Lin Y-H, Farronato M, Ambrosi E and Ielmini D 2021 Switching dynamics of Ag-based filamentary volatile
resistive switching devices—part I: experimental characterization IEEE Trans. Electron Devices 68 4335–41
[6] Florini D, Gandolfi D, Mapelli J, Benatti L, Pavan P and Puglisi F M 2022 A hybrid CMOS-memristor spiking neural network
supporting multiple learning rules IEEE Trans. Neural Netw. Learn. Syst. 68 1–13
[7] Lanza M, Sebastian A, Lu W D, Le Gallo M, Chang M-F, Akinwande D, Puglisi F M, Alshareef H N, Liu M and Roldan J B 2022
Memristive technologies for data storage, computation, encryption, and radio-frequency communication Science 376 eabj9979
[8] Banerjee W 2020 Challenges and applications of emerging nonvolatile memory devices Electronics 9 1029
[9] Butts D A and Goldman M S 2006 Tuning curves, neuronal variability, and sensory coding PLoS Biol. 4 e92
[10] Kang K, Shapley R M and Sompolinsky H 2004 Information tuning of populations of neurons in primary visual cortex J. Neurosci.
24 3726–35
[11] Mapelli J, Boiani G M, D’Angelo E, Bigiani A and Gandolfi D 2022 Long-term synaptic plasticity tunes the gain of information
channels through the cerebellum granular layer Biomedicines 10 3185
[12] Barak O, Rigotti M and Fusi S 2013 The sparseness of mixed selectivity neurons controls the generalization-discrimination
trade-off J. Neurosci. 33 3844–56
[13] Rigotti M, Barak O, Warden M R, Wang X J, Daw N D, Miller E K and Fusi S 2013 The importance of mixed selectivity in complex
cognitive tasks Nature 497 585–90
[14] Abbott L F and Dayan P 2001 Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (Cambridge, MA: The MIT Press)
[15] Quian Quiroga R and Panzeri S 2009 Extracting information from neuronal populations: information theory and decoding
approaches Nat. Rev. Neurosci. 10 173–85
[16] Bialek W, Rieke F, de Ruyter van Steveninck R R and Warland D 1991 Reading a neural code Science 252 1854–7
[17] London M, Schreibman A, Häusser M, Larkum M E and Segev I 2002 The information efficacy of a synapse Nat. Neurosci. 5 332–40
[18] Shannon C 1948 The mathematical theory of communication Bell Syst. Tech. J. 27 379–423
[19] Borst A and Theunissen F E 1999 Information theory and neural coding Nat. Neurosci. 2 947–57
[20] Panzeri S, Senatore R, Montemurro M A and Petersen R S 2007 Correcting for the sampling bias problem in spike train
information measures J. Neurophysiol. 98 1064–72
[21] Joo B, Han J-W and Kong B-S 2022 Energy- and area-efficient CMOS synapse and neuron for spiking neural networks with STDP
learning IEEE Trans. Circuits Syst. I 69 3632–42
[22] Guo T, Pan K, Sun B, Wei L, Yan Y, Zhou Y N and Wu Y A 2021 Adjustable leaky-integrate-and-fire neurons based on
memristor-coupled capacitors Mater. Today Adv. 12 100192
[23] John R A et al 2022 Reconfigurable halide perovskite nanocrystal memristors for neuromorphic computing Nat. Commun. 13 2074
[24] Kumar S, Wang X, Strachan J P, Yang Y and Lu W D 2022 Dynamical memristors for higher-complexity neuromorphic computing
Nat. Rev. Mater. 7 575–91
[25] Anwer S et al 2022 Cobalt oxide nanoparticles embedded in borate matrix: a conduction mode atomic force microscopy approach
to induce nano-memristor switching for neuromorphic applications Appl. Mater. Today 29 101691
[26] Arleo A, Nieus T, Bezzi M, D’Errico A, D’Angelo E and Coenen O J 2010 How synaptic release probability shapes neuronal
transmission: information-theoretic analysis in a cerebellar granule cell Neural Comput. 22 2031–58
[27] Casali S, Tognolina M, Gandolfi D, Mapelli J and D’Angelo E 2020 Cellular-resolution mapping uncovers spatial adaptive filtering
at the rat cerebellum input stage Commun. Biol. 3 635
[28] Mapelli J, Gandolfi D, Giuliani E, Casali S, Congi L, Barbieri A, D’Angelo E and Bigiani A 2021 The effects of the general anesthetic
sevoflurane on neurotransmission: an experimental and computational study Sci. Rep. 11 4335
[29] Nieus T, Sola E, Mapelli J, Saftenku E, Rossi P and D’Angelo E 2006 LTP regulates burst initiation and frequency at mossy
fiber-granule cell synapses of rat cerebellum: experimental observations and theoretical predictions J. Neurophysiol. 95 686–99
[30] Gandolfi D, Pozzi P, Tognolina M, Chirico G, Mapelli J and D’Angelo E 2014 The spatiotemporal organization of cerebellar
network activity resolved by two-photon imaging of multiple single neurons Front. Cell. Neurosci. 8 92
[31] Mapelli J, Gandolfi D, Giuliani E, Prencipe F P, Pellati F, Barbieri A, D’Angelo E and Bigiani A 2015 The effect of desflurane on
neuronal communication at a central synapse PLoS One 10 e0123534
[32] Puglisi F M, Zanotti T and Pavan P 2019 Unimore resistive random access memory (RRAM) Verilog-A model nanoHUB (https://doi.org/10.21981/15GF-KX29)
[33] Padovani A, Larcher L, Puglisi F M and Pavan P 2017 Multiscale modeling of defect-related phenomena in high-k based logic and
memory devices 2017 IEEE 24th Int. Symp. on the Physical and Failure Analysis of Integrated Circuits (IPFA) pp 1–6
[34] Zanotti T, Pavan P and Puglisi F M 2022 Comprehensive physics-based RRAM compact model including the effect of variability
and multi-level random telegraph noise Microelectron. Eng. 266 111886
[35] Puglisi F M, Zagni N, Larcher L and Pavan P 2018 Random telegraph noise in resistive random access memories: compact
modeling and advanced circuit design IEEE Trans. Electron Devices 65 2964–72
[36] Campbell K A 2017 Self-directed channel memristor for high temperature operation Microelectron. J. 59 10–14
[37] Chowdhury S S, Lee C and Roy K 2021 Towards understanding the effect of leak in spiking neural networks Neurocomputing
464 83–94
[38] Eccles J C, Ito M and Szentagothai J 1967 The Cerebellum as a Neuronal Machine (Berlin: Springer)
[39] Jakab R L and Hámori J 1988 Quantitative morphology and synaptology of cerebellar glomeruli in the rat Anat. Embryol. 179 81–88
[40] Chadderton P, Margrie T W and Häusser M 2004 Integration of quanta in cerebellar granule cells during sensory processing Nature
428 856–60
[41] Gandolfi D, Puglisi F M, Boiani G M, Pagnoni G, Friston K J, D’Angelo E and Mapelli J 2022 Emergence of associative learning in a
neuromorphic inference network J. Neural Eng. 19 036022
10
... Nevertheless, in an abstraction exercise, neurons can be conceptualized as digital devices conveying trains of bits, namely, action potentials or spikes, whose transmission dynamics can be altered by the expression of activity-dependent changes in synaptic strength [11,12]. This reductionist perspective is essential when attempting to quantify the information transfer in neuronal circuits and, more importantly, when integrating biological dynamics into neuromorphic hardware [13,14]. The language employed by neurons to communicate can be analyzed by adopting parameters taken directly from information and communication theory and calculating mutual information (MI). ...
... According to the configuration adopted in the biological experiments ( Fig. 1) and simulations (Figs. 2 and 3), a cerebellar GrC-like artificial complementary metal-oxide-semiconductor (CMOS) neuron with four memristor-based synaptic inputs was implemented to investigate the changes in information transfer induced by long-term plasticity. The circuit simulations were run using Cadence Virtuoso software, as in [14]. Briefly, the responses of the artificial CMOS neuron were abstracted by using a Verilog-A behavioral description of its constituent building blocks, and the properties of the artificial synapses were reproduced using an in-house-developed compact model (the UniMORE RRAM Model [56]). ...
... As described in [14], a CMOS LIF neuron supporting a ratedependent plasticity rule on memristive devices was designed and simulated to investigate changes in MI transfer. In this configuration, the input terminal integrates spikes from presynaptic neurons using a capacitor, included in the "integration box" shown in Fig. 5A. ...
Article
Full-text available
The advent of neuromorphic electronics is increasingly revolutionizing the concept of computation. In the last decade, several studies have shown how materials, architectures, and neuromorphic devices can be leveraged to achieve brain-like computation with limited power consumption and high energy efficiency. Neuromorphic systems have been mainly conceived to support spiking neural networks that embed bioinspired plasticity rules such as spike time-dependent plasticity to potentially support both unsupervised and supervised learning. Despite substantial progress in the field, the information transfer capabilities of biological circuits have not yet been achieved. More importantly, demonstrations of the actual performance of neuromorphic systems in this context have never been presented. In this paper, we report similarities between biological, simulated, and artificially reconstructed microcircuits in terms of information transfer from a computational perspective. Specifically, we extensively analyzed the mutual information transfer at the synapse between mossy fibers and granule cells by measuring the relationship between pre- and post-synaptic variability. We extended this analysis to memristor synapses that embed rate-based learning rules, thus providing quantitative validation for neuromorphic hardware and demonstrating the reliability of brain-inspired applications.
... A single bAP generated at the axonal site serves as the post-reinforcement signal at the pre-sensitized synaptic sites in a pre-post time interval of about 4 ms. For example, calcium action potentials are initiated when they coincide with distal dendritic inputs within a time window of several milliseconds [29]. Our temporal model reproduces bAPs. ...
... We implemented the octopus cell with nine ANF connections in its receptive field. Therefore, it receives nine parallel temporal binned ANF streams [29]. We employed fixed time bins of δt = 22.67 µs because of the common audio sampling rate of 44.1 kHz. ...
Article
Full-text available
A dendrocentric backpropagation spike timing-dependent plasticity learning rule has been derived based on temporal logic for a single octopus neuron. It receives parallel spike trains and collectively adjusts its synaptic weights in the range [0, 1] during training. After the training phase, it spikes in reaction to event signaling input patterns in sensory streams. The learning and switching behavior of the octopus cell has been implemented in field-programmable gate array (FPGA) hardware. The application in an FPGA is described and the proof of concept for its application in hardware that was obtained by feeding it with spike cochleagrams is given; also, it is verified by performing a comparison with the pre-computed standard software simulation results.
Article
Full-text available
A central hypothesis on brain functioning is that long-term potentiation (LTP) and depression (LTD) regulate the signals transfer function by modifying the efficacy of synaptic transmission. In the cerebellum, granule cells have been shown to control the gain of signals transmitted through the mossy fiber pathway by exploiting synaptic inhibition in the glomeruli. However, the way LTP and LTD control signal transformation at the single-cell level in the space, time and frequency domains remains unclear. Here, the impact of LTP and LTD on incoming activity patterns was analyzed by combining patch-clamp recordings in acute cerebellar slices and mathematical modeling. LTP reduced the delay, increased the gain and broadened the frequency bandwidth of mossy fiber burst transmission, while LTD caused opposite changes. These properties, by exploiting NMDA subthreshold integration, emerged from microscopic changes in spike generation in individual granule cells such that LTP anticipated the emission of spikes and increased their number and precision, while LTD sorted the opposite effects. Thus, akin with the expansion recoding process theoretically attributed to the cerebellum granular layer, LTP and LTD could implement selective filtering lines channeling information toward the molecular and Purkinje cell layers for further processing.
Article
Full-text available
Artificial intelligence (AI) is changing the way computing is performed to cope with real-world, ill-defined tasks for which traditional algorithms fail. AI requires significant memory access, thus running into the von Neumann bottleneck when implemented in standard computing platforms. In this respect, low-latency energy-efficient in-memory computing can be achieved by exploiting emerging memristive devices, given their ability to emulate synaptic plasticity, which provides a path to design large-scale brain-inspired spiking neural networks (SNNs). Several plasticity rules have been described in the brain and their coexistence in the same network largely expands the computational capabilities of a given circuit. In this work, starting from the electrical characterization and modeling of the memristor device, we propose a neuro-synaptic architecture that co-integrates in a unique platform with a single type of synaptic device to implement two distinct learning rules, namely, the spike-timing-dependent plasticity (STDP) and the Bienenstock–Cooper–Munro (BCM). This architecture, by exploiting the aforementioned learning rules, successfully addressed two different tasks of unsupervised learning.
Article
Full-text available
Objective: In the theoretical framework of predictive coding and active inference, the brain can be viewed as instantiating a rich generative model of the world that predicts incoming sensory data while continuously updating its parameters via minimization of prediction errors. While this theory has been successfully applied to cognitive processes - by modelling the activity of functional neural networks at a mesoscopic scale - the validity of the approach when modelling neurons as an ensemble of inferring agents, in a biologically plausible architecture, remained to be explored. Approach: We modelled a simplified cerebellar circuit with individual neurons acting as Bayesian agents to simulate the classical delayed eyeblink conditioning protocol. Neurons and synapses adjusted their activity to minimize their prediction error, which was used as the network cost function. This cerebellar network was then implemented in hardware by replicating digital neuronal elements via a low-power microcontroller. Main results: Persistent changes of synaptic strength - that mirrored neurophysiological observations - emerged via local (neurocentric) prediction error minimization, leading to the expression of associative learning. The same paradigm was effectively emulated in low-power hardware showing remarkably efficient performance compared to conventional neuromorphic architectures. Significance: These findings show that: i) an ensemble of free energy minimizing neurons - organized in a biological plausible architecture - can recapitulate functional self-organization observed in nature, such as associative plasticity, and ii) a neuromorphic network of inference units can learn unsupervised tasks without embedding predefined learning rules in the circuit, thus providing a potential avenue to a novel form of brain-inspired artificial intelligence.
Article
Full-text available
Many in-memory computing frameworks demand electronic devices with specific switching characteristics to achieve the desired level of computational complexity. Existing memristive devices cannot be reconfigured to meet the diverse volatile and non-volatile switching requirements, and hence rely on tailored material designs specific to the targeted application, limiting their universality. “Reconfigurable memristors” that combine both ionic diffusive and drift mechanisms could address these limitations, but they remain elusive. Here we present a reconfigurable halide perovskite nanocrystal memristor that achieves on-demand switching between diffusive/volatile and drift/non-volatile modes by controllable electrochemical reactions. Judicious selection of the perovskite nanocrystals and organic capping ligands enable state-of-the-art endurance performances in both modes – volatile (2 × 106 cycles) and non-volatile (5.6 × 103 cycles). We demonstrate the relevance of such proof-of-concept perovskite devices on a benchmark reservoir network with volatile recurrent and non-volatile readout layers based on 19,900 measurements across 25 dynamically-configured devices. Existing memristors cannot be reconfigured to meet the diverse switching requirements of various computing frameworks, limiting their universality. Here, the authors present a nanocrystal memristor that can be reconfigured on-demand to address these limitations
Article
Full-text available
To address the von Neumann bottleneck, artificial neural networks (ANNs) have been proposed as the basis of neuromorphic computing systems. The artificial neuron is one of the essential components, collecting the weight-update information of artificial synapses. The leaky integrate-and-fire (LIF) neuron, which mimics the cell membrane of biological neurons, is a promising neural model due to its simplicity. To adjust the performance of artificial neurons, multiple resistors with different resistance values need to be integrated into the circuit, but more components mean higher manufacturing costs, more complex circuits, and more complicated control systems. In this work, the first adjustable LIF neuron was developed, which can further simplify such circuits. To achieve this adjustability, a memristor-coupled capacitor with binary intrinsic resistance states was employed to integrate input signals. The intrinsically tunable resistance modifies the charge-leaking rate, which determines the neural spiking features. Another contribution of this work is to overcome the obstacle that entangled capacitive and memristive effects in such novel memristor-coupled capacitors pose to credible circuit design. A genetic algorithm (GA) was utilized to disentangle the memristive and capacitive effects, which is crucial for circuit design. This method can be generalized to other entangled physical behaviors, facilitating the development of novel circuits. The results not only strengthen neuromorphic computing capability but also provide a methodology to mathematically decode electronic devices with entangled physical behaviors for novel circuits.
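A minimal LIF sketch in software, with purely illustrative values: the leak term here plays the role that the memristor-set resistance plays in the circuit above, so changing the effective time constant changes the spiking rate for the same input.

dt, tau_m = 1e-4, 20e-3        # time step and membrane time constant (assumed)
v_th, v_reset = 1.0, 0.0       # firing threshold and reset potential (assumed)
i_in = 60.0                    # constant input current (arbitrary units)
v, spike_times = 0.0, []
for step in range(int(0.2 / dt)):
    v += dt * (i_in - v / tau_m)        # leaky integration of the input
    if v >= v_th:                       # threshold-based firing
        spike_times.append(step * dt)
        v = v_reset                     # membrane potential reset

Halving tau_m in this sketch halves the steady-state potential the membrane can reach, which is exactly the kind of leak-rate tuning the binary resistance states of the device provide.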
Article
Herein, a cobalt borate (CoBi)-based synaptic device (nano-memristor) was fabricated via a solution-processed electrochemical deposition technique, in which equally spaced nanocrystalline cobalt oxide particles were embedded in an amorphous borate (B-O) mesh. The synaptic properties across the fabricated film were investigated with the help of conductive-mode atomic force microscopy (CAFM). The structural and chemical analysis of the prepared synaptic device revealed that the presence of an ultrathin (≤2 nm) interstitial amorphous mesh of B-O is critical to introducing the reproducible analog switching characteristics caused by the gradual formation and dissolution of a thermodynamically unstable filament at the confined sub-nanometer scale. The prepared device was analyzed in terms of device flux, device charge, and the charge-flux relation, confirming CoBi as an emerging material for neuromorphic computing and the emulation of Hebbian learning rules. Hence, optimized pulse stimuli were used to emulate brain functions such as spike-rate-dependent plasticity, spike-timing-dependent plasticity, and learning and forgetting characteristics in the device. The CoBi synaptic device with an optimized film thickness of 100 nm showed analog switching characteristics with a low energy consumption of 42 fJ and currents in the ∼pA range at applied voltage sweeps of ±3.0 V. From the potentiation and depression characteristics, the nonlinearity factors (NL) for long-term potentiation (LTP) and long-term depression (LTD) were calculated as 3.15 and 3.25, respectively, indicating the device's high-accuracy performance. This work opens up a new avenue to engineer low-power, cost-effective nanoscale memristors that mimic brain functions.
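For reference, nonlinearity factors like those quoted above are typically extracted by fitting the conductance-versus-pulse-number curve to a saturating exponential. A widely used parameterization is sketched below; the authors do not state their exact fitting form, so this is an assumption:

G_{\mathrm{LTP}}(p) = B\left(1 - e^{-p/A}\right) + G_{\min},
\qquad
B = \frac{G_{\max} - G_{\min}}{1 - e^{-P_{\max}/A}}

where p is the pulse index, P_{\max} is the total number of pulses, and the nonlinearity factor is a monotonic function of the fitting parameter A (an analogous expression, decaying from G_{\max}, is used for LTD). Smaller |NL| corresponds to a more linear, and hence more training-friendly, weight update.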
Article
Resistive Random Access Memory (RRAM) technologies are a promising candidate for the development of more energy-efficient circuits for computing, security, and storage applications. However, such devices show stochastic behaviours that not only originate from variations introduced during fabrication but are also intrinsic to their operation. Specifically, cycle-to-cycle variations cause the programmed resistive state to be randomly distributed, while Random Telegraph Noise (RTN) introduces random current fluctuations over time. These phenomena can easily affect the reliability and performance of RRAM-based circuits; therefore, designing such circuits requires accurate compact models. Although several RRAM compact models have been proposed in the literature, they are rarely implemented following the programming best practices that improve simulator convergence, and a compact model able to reproduce the device characteristics, including thermal effects, RTN, and variability, in multiple operating conditions using a single set of parameters is still missing. Also, only a few works in the literature describe the procedure to calibrate such compact models, and even fewer address the calibration of the variability on experimental data. In this work, we extend the UniMORE RRAM physics-based compact model by developing and validating two variability models: (i) a comprehensive variability model that can reproduce the effect of cycle-to-cycle variability in multiple operating conditions, and (ii) a simplified version that requires fewer calibration data and reproduces cycle-to-cycle variations in specific operating conditions. The model is implemented following Verilog-A programming best practices and validated on data from three RRAM technologies from the literature as well as experimentally on TiN/Ti/HfOx/TiN devices, and the relation between the experimental data and the variability model parameters is described.
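An illustrative Monte Carlo sketch of the two stochastic effects named above; this is an assumption-laden toy, not the UniMORE model itself. Cycle-to-cycle variability is drawn as a lognormal spread of the programmed resistance, and RTN as a two-level random telegraph process with capture and emission times.

import numpy as np

rng = np.random.default_rng(2)
r_nominal, sigma = 10e3, 0.15            # nominal LRS and log-spread (assumed)
r_cycles = rng.lognormal(np.log(r_nominal), sigma, size=1000)  # cycle-to-cycle draw

dt, tau_c, tau_e = 1e-6, 5e-5, 8e-5      # RTN capture/emission times (assumed)
state, trace = 0, []
for _ in range(2000):
    p_switch = dt / (tau_c if state == 0 else tau_e)
    if rng.random() < p_switch:
        state ^= 1                        # trap capture or emission event
    trace.append(r_cycles[0] * (1.0 + 0.05 * state))   # ~5% RTN amplitude (assumed)

A compact model of the kind described in the abstract embeds both effects inside the device equations so that a circuit simulator sees them self-consistently, rather than as post-hoc sampling like this sketch.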
Article
This paper proposes a CMOS synapse and neuron for use in spiking neural networks to perform cognitive functions in a bio-inspired manner. The proposed synapse can trace the eligibility of the timing relationship between pre- and post-synaptic spikes, supporting a bio-plausible local learning rule, spike timing-dependent plasticity (STDP), in an energy- and area-efficient manner. The proposed neuron supports neural functions such as synaptic current integration, threshold-based firing, neuronal leaking, membrane-potential resetting, and an adjustable refractory period with improved energy and area efficiency. The shape of the synapse's STDP curve and the firing rate of the neuron can be adjusted as desired, and their variability due to process, voltage, and temperature (PVT) variations can be minimized. The proposed CMOS neuron and synapse circuits were designed in a 28 nm CMOS process. The performance evaluation results indicate that the proposed synapse reduces energy consumption and area by up to 94% and 43%, respectively, compared to conventional CMOS synapses, while the proposed neuron achieves energy and area reductions of 37% and 23%, respectively, compared to conventional CMOS neurons. An associative neural network composed of the proposed neuron and synapse was designed to verify that, together, they perform the cognitive function of associative learning and inference.
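A software analogy for the eligibility tracing the synapse performs, with assumed constants: pair-based STDP keeps an exponentially decaying trace per side, and each spike reads the opposite trace to decide the sign and size of the weight update.

import numpy as np

dt, tau_pre, tau_post = 1e-3, 20e-3, 20e-3   # time step and trace constants (assumed)
a_plus, a_minus = 0.01, 0.012                # potentiation/depression amplitudes (assumed)
x_pre = x_post = 0.0
w = 0.5
pre_steps, post_steps = {10, 50}, {12, 45}   # toy spike times, in steps
for step in range(100):
    x_pre *= np.exp(-dt / tau_pre)           # decay of presynaptic trace
    x_post *= np.exp(-dt / tau_post)         # decay of postsynaptic trace
    if step in pre_steps:
        x_pre += 1.0
        w -= a_minus * x_post                # depression: pre arrives after post
    if step in post_steps:
        x_post += 1.0
        w += a_plus * x_pre                  # potentiation: post arrives after pre

In the circuit described above, the two traces are held as analog quantities on-chip, which is what makes the rule local and area-efficient.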
Article
Memristive devices, which combine a resistor with memory functions such that voltage pulses can change their resistance (and hence their memory state) in a nonvolatile manner, are beginning to be implemented in integrated circuits for memory applications. However, memristive devices could have applications in many other technologies, such as non-von Neumann in-memory computing in crossbar arrays, random number generation for data security, and radio-frequency switches for mobile communications. Progress toward the integration of memristive devices in commercial solid-state electronic circuits and other potential applications will depend on performance and reliability challenges that still need to be addressed, as described here.
Article
Research on electronic devices and materials is currently driven by both the slowing down of transistor scaling and the exponential growth of computing needs, which make present digital computing increasingly capacity-limited and power-limited. A promising alternative approach consists in performing computing based on intrinsic device dynamics, such that each device functionally replaces elaborate digital circuits, leading to adaptive ‘complex computing’. Memristors are a class of devices that naturally embody higher-order dynamics through their internal electrophysical processes. In this Review, we discuss how novel material properties enable complex dynamics and define different orders of complexity in memristor devices and systems. These native complex dynamics at the device level enable new computing architectures, such as brain-inspired neuromorphic systems, which offer both high energy efficiency and high computing capacity.